A: You can get more coins and gems in Volleyball Arena by winning matches and opening chests.
-
-Q: How can I get more power-ups in Volleyball Arena?
-A: You can get more power-ups in Volleyball Arena by unlocking them with coins or gems, or by finding them at random during a match. You can also get more power-ups by watching ads or completing achievements.
-Q: How can I change my character or ball in Volleyball Arena?
-A: You can change your character or ball in Volleyball Arena by going to the Shop tab in the main menu and selecting the character or ball you want to use. You can also change your character or ball before a match by tapping their icons at the bottom of the screen.
-Q: How can I contact the developers of Volleyball Arena?
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Caja Monster.md b/spaces/Benson/text-generation/Examples/Caja Monster.md
deleted file mode 100644
index 07e26385d4d9e3357390a3c367cf0e20b63c3c23..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Caja Monster.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-What is a monster box and why you need one
-Have you ever heard of a monster box? If not, you are missing out on a great way to enjoy games or to invest in precious metals. "Monster box" is a term that can refer to different things depending on the context, but they all have one thing in common: they are boxes that contain monsters. In this article, we explain what a monster box is, what types of monster boxes exist, what benefits they offer, and how you can get one for yourself.
- Definition and Types of Monster Box
-A monster box is a box that contains monsters. Sounds simple, right? But what kind of monsters are we talking about? There are two main types of monster boxes you should know about: monster boxes in games and monster boxes in precious metals.
-Monster Box Download ✦ https://bltlly.com/2v6MHP
- Monster box in games
-A monster box in games is a box that holds digital monsters you can capture, collect, and use in battles. These monsters are usually cute and colorful, with different abilities and personalities. One example of a game that features monster boxes is Monster Box, a casual game in which you capture monsters in your capsules and use them to defend yourself. You can also accept challenges from other trainers, reward your monsters with games and treats, and build your best team.
- Monster box in precious metals
-A monster box in precious metals is a box that holds physical coins or bars made of gold, silver, platinum, or other metals. These coins or bars are usually struck by official mints such as the U.S. Mint, the Royal Canadian Mint, the Austrian Mint, and others. One example of a product that comes in a monster box is the American Silver Eagle, the official silver bullion coin of the United States. A monster box of American Silver Eagles holds 500 coins in 25 tubes of 20 coins each.
- Benefits of owning a monster box
-
- Monster box for gaming
-If you are a fan of games, especially casual games that are fun and easy to play, owning a monster box can give you hours of entertainment. Here are some of the benefits of owning a monster box for gaming:
- Collect and battle monsters
-One of the main attractions of owning a monster box for gaming is that you can collect and battle monsters. You can capture different types of monsters with different abilities and traits, such as fire, water, earth, air, light, dark, and so on. You can also customize your monsters with different outfits, accessories, and names. You can then use your monsters to fight other trainers or enemies in various modes and arenas.
- Challenge and reward your monsters
-Another benefit of owning a monster box for gaming is that you can challenge and reward your monsters. You can take on quests and missions from other trainers or NPCs (non-player characters) that test your skills and strategy. You can also reward your monsters with games and treats that keep them happy and loyal, and unlock achievements and rewards that enhance your gaming experience.
- Build your best team
-A final benefit of owning a monster box for gaming is that you can build your best team. You can mix and match different monsters to form a balanced, powerful squad, upgrade them with items and abilities that boost their performance, and trade and share your monsters with other players to grow your collection and make new friends.
- Monster box for precious metals
-If you like investing in precious metals, especially coins or bars struck by official mints, owning a monster box can give you security and value. Here are some of the benefits of owning a monster box for precious metals:
-
- Store and protect your metals
-
- Save money and time
-Another benefit of owning a monster box for precious metals is that it saves you money and time. Buying a monster box of coins or bars is usually cheaper than buying individual pieces, since you get a bulk discount or a lower premium. You also save time by ordering online or visiting a dealer that has the monster box in stock, instead of shopping around for different products or mints.
- Grow the value of your investment
-A final benefit of owning a monster box for precious metals is that it can grow the value of your investment. A monster box of coins or bars is a tangible asset with intrinsic value and limited supply. The value of your metals depends on the market price, supply and demand, rarity and quality, and the mint and design. You can also benefit from appreciation, diversification, liquidity, and from hedging your metals against inflation, currency devaluation, or economic instability.
- How to get a monster box
-Now that you know what a monster box is and what benefits it offers, you may be wondering how to get one for yourself. The process differs depending on whether you want a monster box for gaming or for precious metals.
- Monster box for gaming
-If you want a monster box for gaming, here are the steps to follow:
- Download the app or play online
-The first step is to download the app, or play online, the game that features monster boxes. For example, you can download Monster Box from Kongregate. You can also check out other games with similar features, such as Pokémon Go or Monster Legends, or bullion dealers such as APMEX, JM Bullion, Liberty Coin, or Bullion Exchanges
-    ", severity_fatal, description_unknown_ca)
- msg_len = len(msg)
- record_type_alert = 0x15
- record = struct.pack(">BBBH", record_type_alert, ver_maj, ver_min, msg_len) + msg
- return record
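The indented Python lines above are an orphaned remnant of a helper from a different file that was spliced into this diff; they frame an alert message into a TLS record with struct.pack. Below is a minimal, self-contained sketch of that framing, assuming standard TLS constants for the names that appear in the remnant (severity_fatal, description_unknown_ca, ver_maj, ver_min); it is an illustration, not the missing original.

import struct

# TLS alert body: 1 byte severity + 1 byte description.
# TLS record header: 1 byte content type (0x15 = alert), 2 bytes version, 2 bytes length.
SEVERITY_FATAL = 2            # assumed value, per RFC 5246
DESCRIPTION_UNKNOWN_CA = 48   # assumed value, per RFC 5246

def build_alert_record(ver_maj=3, ver_min=3,
                       severity=SEVERITY_FATAL,
                       description=DESCRIPTION_UNKNOWN_CA):
    msg = struct.pack(">BB", severity, description)
    record_type_alert = 0x15
    record = struct.pack(">BBBH", record_type_alert, ver_maj, ver_min, len(msg)) + msg
    return record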
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/anchor_generator.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/anchor_generator.py
deleted file mode 100644
index c0ae9bfc61f5c8483727dded627dfd8addc19cd0..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/anchor_generator.py
+++ /dev/null
@@ -1,365 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import copy
-import math
-from typing import List
-import torch
-from torch import nn
-
-from detectron2.layers import ShapeSpec
-from detectron2.structures import Boxes, RotatedBoxes
-from detectron2.utils.registry import Registry
-
-ANCHOR_GENERATOR_REGISTRY = Registry("ANCHOR_GENERATOR")
-ANCHOR_GENERATOR_REGISTRY.__doc__ = """
-Registry for modules that create object detection anchors for feature maps.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-"""
-
-
-class BufferList(nn.Module):
- """
- Similar to nn.ParameterList, but for buffers
- """
-
- def __init__(self, buffers=None):
- super(BufferList, self).__init__()
- if buffers is not None:
- self.extend(buffers)
-
- def extend(self, buffers):
- offset = len(self)
- for i, buffer in enumerate(buffers):
- self.register_buffer(str(offset + i), buffer)
- return self
-
- def __len__(self):
- return len(self._buffers)
-
- def __iter__(self):
- return iter(self._buffers.values())
-
-
-def _create_grid_offsets(size, stride, offset, device):
- grid_height, grid_width = size
- shifts_x = torch.arange(
- offset * stride, grid_width * stride, step=stride, dtype=torch.float32, device=device
- )
- shifts_y = torch.arange(
- offset * stride, grid_height * stride, step=stride, dtype=torch.float32, device=device
- )
-
- shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x)
- shift_x = shift_x.reshape(-1)
- shift_y = shift_y.reshape(-1)
- return shift_x, shift_y
-
-
-@ANCHOR_GENERATOR_REGISTRY.register()
-class DefaultAnchorGenerator(nn.Module):
- """
- For a set of image sizes and feature maps, computes a set of anchors.
- """
-
- def __init__(self, cfg, input_shape: List[ShapeSpec]):
- super().__init__()
- # fmt: off
- sizes = cfg.MODEL.ANCHOR_GENERATOR.SIZES
- aspect_ratios = cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS
- self.strides = [x.stride for x in input_shape]
- self.offset = cfg.MODEL.ANCHOR_GENERATOR.OFFSET
-
- assert 0.0 <= self.offset < 1.0, self.offset
-
- # fmt: on
- """
- sizes (list[list[int]]): sizes[i] is the list of anchor sizes to use
- for the i-th feature map. If len(sizes) == 1, then the same list of
- anchor sizes, given by sizes[0], is used for all feature maps. Anchor
- sizes are given in absolute lengths in units of the input image;
- they do not dynamically scale if the input image size changes.
- aspect_ratios (list[list[float]]): aspect_ratios[i] is the list of
- anchor aspect ratios to use for the i-th feature map. If
- len(aspect_ratios) == 1, then the same list of anchor aspect ratios,
- given by aspect_ratios[0], is used for all feature maps.
- strides (list[int]): stride of each input feature.
- """
-
- self.num_features = len(self.strides)
- self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios)
-
- def _calculate_anchors(self, sizes, aspect_ratios):
- # If one size (or aspect ratio) is specified and there are multiple feature
- # maps, then we "broadcast" anchors of that single size (or aspect ratio)
- # over all feature maps.
- if len(sizes) == 1:
- sizes *= self.num_features
- if len(aspect_ratios) == 1:
- aspect_ratios *= self.num_features
- assert self.num_features == len(sizes)
- assert self.num_features == len(aspect_ratios)
-
- cell_anchors = [
- self.generate_cell_anchors(s, a).float() for s, a in zip(sizes, aspect_ratios)
- ]
-
- return BufferList(cell_anchors)
-
- @property
- def box_dim(self):
- """
- Returns:
- int: the dimension of each anchor box.
- """
- return 4
-
- @property
- def num_cell_anchors(self):
- """
- Returns:
- list[int]: Each int is the number of anchors at every pixel
- location, on that feature map.
- For example, if at every pixel we use anchors of 3 aspect
- ratios and 5 sizes, the number of anchors is 15.
- (See also ANCHOR_GENERATOR.SIZES and ANCHOR_GENERATOR.ASPECT_RATIOS in config)
-
- In standard RPN models, `num_cell_anchors` on every feature map is the same.
- """
- return [len(cell_anchors) for cell_anchors in self.cell_anchors]
-
- def grid_anchors(self, grid_sizes):
- anchors = []
- for size, stride, base_anchors in zip(grid_sizes, self.strides, self.cell_anchors):
- shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors.device)
- shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1)
-
- anchors.append((shifts.view(-1, 1, 4) + base_anchors.view(1, -1, 4)).reshape(-1, 4))
-
- return anchors
-
- def generate_cell_anchors(self, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)):
- """
- Generate a tensor storing anchor boxes, which are continuous geometric rectangles
- centered on one feature map point sample. We can later build the set of anchors
- for the entire feature map by tiling these tensors; see `meth:grid_anchors`.
-
- Args:
- sizes (tuple[float]): Absolute size (i.e. sqrt of area) of the anchors in the units
- of pixels on the input image (the input received by the network, after
- undergoing necessary scaling).
-            aspect_ratios (tuple[float]): Aspect ratios of the boxes computed as box
- height / width.
-
- Returns:
- Tensor of shape (len(sizes) * len(aspect_ratios), 4) storing anchor boxes
- in XYXY format.
- """
-
- # This is different from the anchor generator defined in the original Faster R-CNN
- # code or Detectron. They yield the same AP, however the old version defines cell
- # anchors in a less natural way with a shift relative to the feature grid and
- # quantization that results in slightly different sizes for different aspect ratios.
- # See also https://github.com/facebookresearch/Detectron/issues/227
-
- anchors = []
- for size in sizes:
- area = size ** 2.0
- for aspect_ratio in aspect_ratios:
- # s * s = w * h
- # a = h / w
- # ... some algebra ...
- # w = sqrt(s * s / a)
- # h = a * w
- w = math.sqrt(area / aspect_ratio)
- h = aspect_ratio * w
- x0, y0, x1, y1 = -w / 2.0, -h / 2.0, w / 2.0, h / 2.0
- anchors.append([x0, y0, x1, y1])
- return torch.tensor(anchors)
-
- def forward(self, features):
- """
- Args:
- features (list[Tensor]): list of backbone feature maps on which to generate anchors.
-
- Returns:
- list[list[Boxes]]: a list of #image elements. Each is a list of #feature level Boxes.
- The Boxes contains anchors of this image on the specific feature level.
- """
- num_images = len(features[0])
- grid_sizes = [feature_map.shape[-2:] for feature_map in features]
- anchors_over_all_feature_maps = self.grid_anchors(grid_sizes)
-
- anchors_in_image = []
- for anchors_per_feature_map in anchors_over_all_feature_maps:
- boxes = Boxes(anchors_per_feature_map)
- anchors_in_image.append(boxes)
-
- anchors = [copy.deepcopy(anchors_in_image) for _ in range(num_images)]
- return anchors
-
-
-@ANCHOR_GENERATOR_REGISTRY.register()
-class RotatedAnchorGenerator(nn.Module):
- """
- The anchor generator used by Rotated RPN (RRPN).
- """
-
- def __init__(self, cfg, input_shape: List[ShapeSpec]):
- super().__init__()
- # fmt: off
- sizes = cfg.MODEL.ANCHOR_GENERATOR.SIZES
- aspect_ratios = cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS
- angles = cfg.MODEL.ANCHOR_GENERATOR.ANGLES
- self.strides = [x.stride for x in input_shape]
- self.offset = cfg.MODEL.ANCHOR_GENERATOR.OFFSET
-
- assert 0.0 <= self.offset < 1.0, self.offset
-
- # fmt: on
-
- self.num_features = len(self.strides)
- self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios, angles, self.strides)
-
- def _calculate_anchors(self, sizes, aspect_ratios, angles, feature_strides):
- """
- Args:
- sizes (list[list[int]]): sizes[i] is the list of anchor sizes to use
- for the i-th feature map. If len(sizes) == 1, then the same list of
- anchor sizes, given by sizes[0], is used for all feature maps. Anchor
- sizes are given in absolute lengths in units of the input image;
- they do not dynamically scale if the input image size changes.
- aspect_ratios (list[list[float]]): aspect_ratios[i] is the list of
- anchor aspect ratios to use for the i-th feature map. If
- len(aspect_ratios) == 1, then the same list of anchor aspect ratios,
- given by aspect_ratios[0], is used for all feature maps.
- angles (list[list[float]]): angles[i] is the list of
- anchor angles to use for the i-th feature map. If
- len(angles) == 1, then the same list of anchor angles,
- given by angles[0], is used for all feature maps.
- feature_strides (list[number]): list of feature map strides (with respect
- to the input image) for each input feature map.
- """
-
- # If one size (or aspect ratio) is specified and there are multiple feature
- # maps, then we "broadcast" anchors of that single size
- # (or aspect ratio/angle) over all feature maps.
-
- if len(sizes) == 1:
- sizes *= self.num_features
- if len(aspect_ratios) == 1:
- aspect_ratios *= self.num_features
- if len(angles) == 1:
- angles *= self.num_features
- assert self.num_features == len(sizes)
- assert self.num_features == len(aspect_ratios)
- assert self.num_features == len(angles)
-
- cell_anchors = [
- self.generate_cell_anchors(size, aspect_ratio, angle).float()
- for size, aspect_ratio, angle in zip(sizes, aspect_ratios, angles)
- ]
-
- return BufferList(cell_anchors)
-
- @property
- def box_dim(self):
- """
- Returns:
- int: the dimension of each anchor box.
- """
- return 5
-
- @property
- def num_cell_anchors(self):
- """
- Returns:
- list[int]: Each int is the number of anchors at every pixel
- location, on that feature map.
- For example, if at every pixel we use anchors of 3 aspect
- ratios, 2 sizes and 5 angles, the number of anchors is 30.
- (See also ANCHOR_GENERATOR.SIZES, ANCHOR_GENERATOR.ASPECT_RATIOS
- and ANCHOR_GENERATOR.ANGLES in config)
-
- In standard RRPN models, `num_cell_anchors` on every feature map is the same.
- """
- return [len(cell_anchors) for cell_anchors in self.cell_anchors]
-
- def grid_anchors(self, grid_sizes):
- anchors = []
- for size, stride, base_anchors in zip(grid_sizes, self.strides, self.cell_anchors):
- shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors.device)
- zeros = torch.zeros_like(shift_x)
- shifts = torch.stack((shift_x, shift_y, zeros, zeros, zeros), dim=1)
-
- anchors.append((shifts.view(-1, 1, 5) + base_anchors.view(1, -1, 5)).reshape(-1, 5))
-
- return anchors
-
- def generate_cell_anchors(
- self,
- sizes=(32, 64, 128, 256, 512),
- aspect_ratios=(0.5, 1, 2),
- angles=(-90, -60, -30, 0, 30, 60, 90),
- ):
- """
- Generate a tensor storing anchor boxes, which are continuous geometric rectangles
- centered on one feature map point sample. We can later build the set of anchors
- for the entire feature map by tiling these tensors; see `meth:grid_anchors`.
-
- Args:
- sizes (tuple[float]): Absolute size of the anchors in the units of the input
- image (the input received by the network, after undergoing necessary scaling).
- The absolute size is given as the side length of a box.
-            aspect_ratios (tuple[float]): Aspect ratios of the boxes computed as box
- height / width.
-            angles (tuple[float]): Angles of boxes indicating how many degrees
- the boxes are rotated counter-clockwise.
-
- Returns:
- Tensor of shape (len(sizes) * len(aspect_ratios) * len(angles), 5)
- storing anchor boxes in (x_ctr, y_ctr, w, h, angle) format.
- """
- anchors = []
- for size in sizes:
- area = size ** 2.0
- for aspect_ratio in aspect_ratios:
- # s * s = w * h
- # a = h / w
- # ... some algebra ...
- # w = sqrt(s * s / a)
- # h = a * w
- w = math.sqrt(area / aspect_ratio)
- h = aspect_ratio * w
- anchors.extend([0, 0, w, h, a] for a in angles)
-
- return torch.tensor(anchors)
-
- def forward(self, features):
- """
- Args:
- features (list[Tensor]): list of backbone feature maps on which to generate anchors.
-
- Returns:
- list[list[RotatedBoxes]]:
- a list of #image elements. Each is a list of #feature level RotatedBoxes.
- The RotatedBoxes contains anchors of this image on the specific feature level.
- """
- num_images = len(features[0])
- grid_sizes = [feature_map.shape[-2:] for feature_map in features]
- anchors_over_all_feature_maps = self.grid_anchors(grid_sizes)
-
- anchors_in_image = []
- for anchors_per_feature_map in anchors_over_all_feature_maps:
- boxes = RotatedBoxes(anchors_per_feature_map)
- anchors_in_image.append(boxes)
-
- anchors = [copy.deepcopy(anchors_in_image) for _ in range(num_images)]
- return anchors
-
-
-def build_anchor_generator(cfg, input_shape):
- """
-    Build an anchor generator from `cfg.MODEL.ANCHOR_GENERATOR.NAME`.
- """
- anchor_generator = cfg.MODEL.ANCHOR_GENERATOR.NAME
- return ANCHOR_GENERATOR_REGISTRY.get(anchor_generator)(cfg, input_shape)
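As a plain-arithmetic companion to generate_cell_anchors above, here is a minimal sketch (no torch; the function name is illustrative) of how the XYXY cell anchors are derived: for each size and aspect ratio, area = size**2, w = sqrt(area / ratio), h = ratio * w, and the box is centered at the origin.

import math

def cell_anchors_xyxy(sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)):
    anchors = []
    for size in sizes:
        area = size ** 2.0
        for ratio in aspect_ratios:
            w = math.sqrt(area / ratio)  # from s*s = w*h and ratio = h/w
            h = ratio * w
            anchors.append((-w / 2.0, -h / 2.0, w / 2.0, h / 2.0))
    return anchors

# 5 sizes x 3 aspect ratios -> 15 anchors per feature-map location,
# matching the num_cell_anchors example in the docstring above.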
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/dev/run_inference_tests.sh b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/dev/run_inference_tests.sh
deleted file mode 100644
index 34f47d5a07a90c411e830c98a346845fa618f836..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/dev/run_inference_tests.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-BIN="python train_net.py"
-OUTPUT="inference_test_output"
-NUM_GPUS=2
-IMS_PER_GPU=2
-IMS_PER_BATCH=$(( NUM_GPUS * IMS_PER_GPU ))
-
-CFG_LIST=( "${@:1}" )
-
-if [ ${#CFG_LIST[@]} -eq 0 ]; then
- CFG_LIST=( ./configs/quick_schedules/*inference_acc_test.yaml )
-fi
-
-echo "========================================================================"
-echo "Configs to run:"
-echo "${CFG_LIST[@]}"
-echo "========================================================================"
-
-for cfg in "${CFG_LIST[@]}"; do
- echo "========================================================================"
- echo "Running $cfg ..."
- echo "========================================================================"
- $BIN \
- --eval-only \
- --num-gpus $NUM_GPUS \
- --config-file "$cfg" \
- OUTPUT_DIR "$OUTPUT" \
- SOLVER.IMS_PER_BATCH $IMS_PER_BATCH
- rm -rf $OUTPUT
-done
-
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/tensormask/config.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/tensormask/config.py
deleted file mode 100644
index 44479f211811bd4060c6afef9ed86791b0dcd0d4..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/tensormask/config.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-from detectron2.config import CfgNode as CN
-
-
-def add_tensormask_config(cfg):
- """
- Add config for TensorMask.
- """
- cfg.MODEL.TENSOR_MASK = CN()
-
- # Anchor parameters
- cfg.MODEL.TENSOR_MASK.IN_FEATURES = ["p2", "p3", "p4", "p5", "p6", "p7"]
-
- # Convolutions to use in the towers
- cfg.MODEL.TENSOR_MASK.NUM_CONVS = 4
-
- # Number of foreground classes.
- cfg.MODEL.TENSOR_MASK.NUM_CLASSES = 80
- # Channel size for the classification tower
- cfg.MODEL.TENSOR_MASK.CLS_CHANNELS = 256
-
- cfg.MODEL.TENSOR_MASK.SCORE_THRESH_TEST = 0.05
- # Only the top (1000 * #levels) candidate boxes across all levels are
- # considered jointly during test (to improve speed)
- cfg.MODEL.TENSOR_MASK.TOPK_CANDIDATES_TEST = 6000
- cfg.MODEL.TENSOR_MASK.NMS_THRESH_TEST = 0.5
-
- # Box parameters
- # Channel size for the box tower
- cfg.MODEL.TENSOR_MASK.BBOX_CHANNELS = 128
- # Weights on (dx, dy, dw, dh)
- cfg.MODEL.TENSOR_MASK.BBOX_REG_WEIGHTS = (1.5, 1.5, 0.75, 0.75)
-
- # Loss parameters
- cfg.MODEL.TENSOR_MASK.FOCAL_LOSS_GAMMA = 3.0
- cfg.MODEL.TENSOR_MASK.FOCAL_LOSS_ALPHA = 0.3
-
- # Mask parameters
- # Channel size for the mask tower
- cfg.MODEL.TENSOR_MASK.MASK_CHANNELS = 128
- # Mask loss weight
- cfg.MODEL.TENSOR_MASK.MASK_LOSS_WEIGHT = 2.0
- # weight on positive pixels within the mask
- cfg.MODEL.TENSOR_MASK.POSITIVE_WEIGHT = 1.5
- # Whether to predict in the aligned representation
- cfg.MODEL.TENSOR_MASK.ALIGNED_ON = False
- # Whether to use the bipyramid architecture
- cfg.MODEL.TENSOR_MASK.BIPYRAMID_ON = False
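A hedged sketch of how a config helper like add_tensormask_config above is typically composed in detectron2-style projects; the tensormask import path and the override value are assumptions taken from this repository's layout, not verified here.

from detectron2.config import get_cfg
from tensormask import add_tensormask_config  # package path assumed from this repo layout

cfg = get_cfg()                          # base detectron2 config
add_tensormask_config(cfg)               # attach the MODEL.TENSOR_MASK node defined above
cfg.MODEL.TENSOR_MASK.NUM_CLASSES = 80   # override defaults as needed
cfg.freeze()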
diff --git a/spaces/ChenyangSi/FreeU/README.md b/spaces/ChenyangSi/FreeU/README.md
deleted file mode 100644
index 04908d93f7418d405bb05ed9a7a04461e2d366a4..0000000000000000000000000000000000000000
--- a/spaces/ChenyangSi/FreeU/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: FreeU
-emoji: 🐠
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.44.0
-app_file: app.py
-pinned: false
-hf_oauth: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CofAI/chat/client/css/button.css b/spaces/CofAI/chat/client/css/button.css
deleted file mode 100644
index 5f604a8460d048458249f78be9dc544ade84801e..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/client/css/button.css
+++ /dev/null
@@ -1,26 +0,0 @@
-.button {
- display: flex;
- padding: 8px 12px;
- align-items: center;
- justify-content: center;
- border: 1px solid var(--conversations);
- border-radius: var(--border-radius-1);
- width: 100%;
- background: transparent;
- cursor: pointer;
-}
-
-.button span {
- color: var(--colour-3);
- font-size: 0.875rem;
-}
-
-.button i::before {
- margin-right: 8px;
-}
-
-@media screen and (max-width: 990px) {
- .button span {
- font-size: 0.75rem;
- }
-}
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/cldm.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/cldm.py
deleted file mode 100644
index 0b3ac7a575cf4933fc14dfc15dd3cca41cb3f3e8..0000000000000000000000000000000000000000
--- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/cldm.py
+++ /dev/null
@@ -1,435 +0,0 @@
-import einops
-import torch
-import torch as th
-import torch.nn as nn
-
-from ldm.modules.diffusionmodules.util import (
- conv_nd,
- linear,
- zero_module,
- timestep_embedding,
-)
-
-from einops import rearrange, repeat
-from torchvision.utils import make_grid
-from ldm.modules.attention import SpatialTransformer
-from ldm.modules.diffusionmodules.openaimodel import UNetModel, TimestepEmbedSequential, ResBlock, Downsample, AttentionBlock
-from ldm.models.diffusion.ddpm import LatentDiffusion
-from ldm.util import log_txt_as_img, exists, instantiate_from_config
-from ldm.models.diffusion.ddim import DDIMSampler
-
-
-class ControlledUnetModel(UNetModel):
- def forward(self, x, timesteps=None, context=None, control=None, only_mid_control=False, **kwargs):
- hs = []
- with torch.no_grad():
- t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)
- emb = self.time_embed(t_emb)
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb, context)
- hs.append(h)
- h = self.middle_block(h, emb, context)
-
- if control is not None:
- h += control.pop()
-
- for i, module in enumerate(self.output_blocks):
- if only_mid_control or control is None:
- h = torch.cat([h, hs.pop()], dim=1)
- else:
- h = torch.cat([h, hs.pop() + control.pop()], dim=1)
- h = module(h, emb, context)
-
- h = h.type(x.dtype)
- return self.out(h)
-
-
-class ControlNet(nn.Module):
- def __init__(
- self,
- image_size,
- in_channels,
- model_channels,
- hint_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=-1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- use_new_attention_order=False,
- use_spatial_transformer=False, # custom transformer support
- transformer_depth=1, # custom transformer support
- context_dim=None, # custom transformer support
- n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model
- legacy=True,
- disable_self_attentions=None,
- num_attention_blocks=None,
- disable_middle_self_attn=False,
- use_linear_in_transformer=False,
- ):
- super().__init__()
- if use_spatial_transformer:
- assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...'
-
- if context_dim is not None:
- assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...'
- from omegaconf.listconfig import ListConfig
- if type(context_dim) == ListConfig:
- context_dim = list(context_dim)
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- if num_heads == -1:
- assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set'
-
- if num_head_channels == -1:
- assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'
-
- self.dims = dims
- self.image_size = image_size
- self.in_channels = in_channels
- self.model_channels = model_channels
- if isinstance(num_res_blocks, int):
- self.num_res_blocks = len(channel_mult) * [num_res_blocks]
- else:
- if len(num_res_blocks) != len(channel_mult):
- raise ValueError("provide num_res_blocks either as an int (globally constant) or "
- "as a list/tuple (per-level) with the same length as channel_mult")
- self.num_res_blocks = num_res_blocks
- if disable_self_attentions is not None:
- # should be a list of booleans, indicating whether to disable self-attention in TransformerBlocks or not
- assert len(disable_self_attentions) == len(channel_mult)
- if num_attention_blocks is not None:
- assert len(num_attention_blocks) == len(self.num_res_blocks)
- assert all(map(lambda i: self.num_res_blocks[i] >= num_attention_blocks[i], range(len(num_attention_blocks))))
- print(f"Constructor of UNetModel received num_attention_blocks={num_attention_blocks}. "
- f"This option has LESS priority than attention_resolutions {attention_resolutions}, "
- f"i.e., in cases where num_attention_blocks[i] > 0 but 2**i not in attention_resolutions, "
- f"attention will still not be set.")
-
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.use_checkpoint = use_checkpoint
- self.dtype = th.float16 if use_fp16 else th.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
- self.predict_codebook_ids = n_embed is not None
-
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- self.input_blocks = nn.ModuleList(
- [
- TimestepEmbedSequential(
- conv_nd(dims, in_channels, model_channels, 3, padding=1)
- )
- ]
- )
- self.zero_convs = nn.ModuleList([self.make_zero_conv(model_channels)])
-
- self.input_hint_block = TimestepEmbedSequential(
- conv_nd(dims, hint_channels, 16, 3, padding=1),
- nn.SiLU(),
- conv_nd(dims, 16, 16, 3, padding=1),
- nn.SiLU(),
- conv_nd(dims, 16, 32, 3, padding=1, stride=2),
- nn.SiLU(),
- conv_nd(dims, 32, 32, 3, padding=1),
- nn.SiLU(),
- conv_nd(dims, 32, 96, 3, padding=1, stride=2),
- nn.SiLU(),
- conv_nd(dims, 96, 96, 3, padding=1),
- nn.SiLU(),
- conv_nd(dims, 96, 256, 3, padding=1, stride=2),
- nn.SiLU(),
- zero_module(conv_nd(dims, 256, model_channels, 3, padding=1))
- )
-
- self._feature_size = model_channels
- input_block_chans = [model_channels]
- ch = model_channels
- ds = 1
- for level, mult in enumerate(channel_mult):
- for nr in range(self.num_res_blocks[level]):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=mult * model_channels,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = mult * model_channels
- if ds in attention_resolutions:
- if num_head_channels == -1:
- dim_head = ch // num_heads
- else:
- num_heads = ch // num_head_channels
- dim_head = num_head_channels
- if legacy:
- # num_heads = 1
- dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
- if exists(disable_self_attentions):
- disabled_sa = disable_self_attentions[level]
- else:
- disabled_sa = False
-
- if not exists(num_attention_blocks) or nr < num_attention_blocks[level]:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=dim_head,
- use_new_attention_order=use_new_attention_order,
- ) if not use_spatial_transformer else SpatialTransformer(
- ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim,
- disable_self_attn=disabled_sa, use_linear=use_linear_in_transformer,
- use_checkpoint=use_checkpoint
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self.zero_convs.append(self.make_zero_conv(ch))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- self.zero_convs.append(self.make_zero_conv(ch))
- ds *= 2
- self._feature_size += ch
-
- if num_head_channels == -1:
- dim_head = ch // num_heads
- else:
- num_heads = ch // num_head_channels
- dim_head = num_head_channels
- if legacy:
- # num_heads = 1
- dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=dim_head,
- use_new_attention_order=use_new_attention_order,
- ) if not use_spatial_transformer else SpatialTransformer( # always uses a self-attn
- ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim,
- disable_self_attn=disable_middle_self_attn, use_linear=use_linear_in_transformer,
- use_checkpoint=use_checkpoint
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self.middle_block_out = self.make_zero_conv(ch)
- self._feature_size += ch
-
- def make_zero_conv(self, channels):
- return TimestepEmbedSequential(zero_module(conv_nd(self.dims, channels, channels, 1, padding=0)))
-
- def forward(self, x, hint, timesteps, context, **kwargs):
- t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)
- emb = self.time_embed(t_emb)
-
- guided_hint = self.input_hint_block(hint, emb, context)
-
- outs = []
-
- h = x.type(self.dtype)
- for module, zero_conv in zip(self.input_blocks, self.zero_convs):
- if guided_hint is not None:
- h = module(h, emb, context)
- h += guided_hint
- guided_hint = None
- else:
- h = module(h, emb, context)
- outs.append(zero_conv(h, emb, context))
-
- h = self.middle_block(h, emb, context)
- outs.append(self.middle_block_out(h, emb, context))
-
- return outs
-
-
-class ControlLDM(LatentDiffusion):
-
- def __init__(self, control_stage_config, control_key, only_mid_control, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.control_model = instantiate_from_config(control_stage_config)
- self.control_key = control_key
- self.only_mid_control = only_mid_control
- self.control_scales = [1.0] * 13
-
- @torch.no_grad()
- def get_input(self, batch, k, bs=None, *args, **kwargs):
- x, c = super().get_input(batch, self.first_stage_key, *args, **kwargs)
- control = batch[self.control_key]
- if bs is not None:
- control = control[:bs]
- control = control.to(self.device)
- control = einops.rearrange(control, 'b h w c -> b c h w')
- control = control.to(memory_format=torch.contiguous_format).float()
- return x, dict(c_crossattn=[c], c_concat=[control])
-
- def apply_model(self, x_noisy, t, cond, *args, **kwargs):
- assert isinstance(cond, dict)
- diffusion_model = self.model.diffusion_model
-
- cond_txt = torch.cat(cond['c_crossattn'], 1)
-
- if cond['c_concat'] is None:
- eps = diffusion_model(x=x_noisy, timesteps=t, context=cond_txt, control=None, only_mid_control=self.only_mid_control)
- else:
- control = self.control_model(x=x_noisy, hint=torch.cat(cond['c_concat'], 1), timesteps=t, context=cond_txt)
- control = [c * scale for c, scale in zip(control, self.control_scales)]
- eps = diffusion_model(x=x_noisy, timesteps=t, context=cond_txt, control=control, only_mid_control=self.only_mid_control)
-
- return eps
-
- @torch.no_grad()
- def get_unconditional_conditioning(self, N):
- return self.get_learned_conditioning([""] * N)
-
- @torch.no_grad()
- def log_images(self, batch, N=4, n_row=2, sample=False, ddim_steps=50, ddim_eta=0.0, return_keys=None,
- quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True,
- plot_diffusion_rows=False, unconditional_guidance_scale=9.0, unconditional_guidance_label=None,
- use_ema_scope=True,
- **kwargs):
- use_ddim = ddim_steps is not None
-
- log = dict()
- z, c = self.get_input(batch, self.first_stage_key, bs=N)
- c_cat, c = c["c_concat"][0][:N], c["c_crossattn"][0][:N]
- N = min(z.shape[0], N)
- n_row = min(z.shape[0], n_row)
- log["reconstruction"] = self.decode_first_stage(z)
- log["control"] = c_cat * 2.0 - 1.0
- log["conditioning"] = log_txt_as_img((512, 512), batch[self.cond_stage_key], size=16)
-
- if plot_diffusion_rows:
- # get diffusion row
- diffusion_row = list()
- z_start = z[:n_row]
- for t in range(self.num_timesteps):
- if t % self.log_every_t == 0 or t == self.num_timesteps - 1:
- t = repeat(torch.tensor([t]), '1 -> b', b=n_row)
- t = t.to(self.device).long()
- noise = torch.randn_like(z_start)
- z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise)
- diffusion_row.append(self.decode_first_stage(z_noisy))
-
- diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W
- diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w')
- diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w')
- diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0])
- log["diffusion_row"] = diffusion_grid
-
- if sample:
- # get denoise row
- samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta)
- x_samples = self.decode_first_stage(samples)
- log["samples"] = x_samples
- if plot_denoise_rows:
- denoise_grid = self._get_denoise_row_from_list(z_denoise_row)
- log["denoise_row"] = denoise_grid
-
- if unconditional_guidance_scale > 1.0:
- uc_cross = self.get_unconditional_conditioning(N)
- uc_cat = c_cat # torch.zeros_like(c_cat)
- uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]}
- samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]},
- batch_size=N, ddim=use_ddim,
- ddim_steps=ddim_steps, eta=ddim_eta,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=uc_full,
- )
- x_samples_cfg = self.decode_first_stage(samples_cfg)
- log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg
-
- return log
-
- @torch.no_grad()
- def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs):
- ddim_sampler = DDIMSampler(self)
- b, c, h, w = cond["c_concat"][0].shape
- shape = (self.channels, h // 8, w // 8)
- samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size, shape, cond, verbose=False, **kwargs)
- return samples, intermediates
-
- def configure_optimizers(self):
- lr = self.learning_rate
- params = list(self.control_model.parameters())
- if not self.sd_locked:
- params += list(self.model.diffusion_model.output_blocks.parameters())
- params += list(self.model.diffusion_model.out.parameters())
- opt = torch.optim.AdamW(params, lr=lr)
- return opt
-
- def low_vram_shift(self, is_diffusing):
- if is_diffusing:
- self.model = self.model.cuda()
- self.control_model = self.control_model.cuda()
- self.first_stage_model = self.first_stage_model.cpu()
- self.cond_stage_model = self.cond_stage_model.cpu()
- else:
- self.model = self.model.cpu()
- self.control_model = self.control_model.cpu()
- self.first_stage_model = self.first_stage_model.cuda()
- self.cond_stage_model = self.cond_stage_model.cuda()
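A minimal standalone sketch of the zero-convolution idea behind make_zero_conv above: a conv whose weight and bias start at zero, so adding its output to the frozen UNet features is a no-op at initialization and the control signal grows in during training. This is an illustration, not the model's actual wiring.

import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    conv = nn.Conv2d(channels, channels, kernel_size=1, padding=0)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

x = torch.randn(1, 8, 16, 16)
assert torch.allclose(zero_conv(8)(x), torch.zeros_like(x))  # contributes nothing at init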
diff --git a/spaces/Cvandi/remake/realesrgan/models/realesrgan_model.py b/spaces/Cvandi/remake/realesrgan/models/realesrgan_model.py
deleted file mode 100644
index c298a09c42433177f90001a0a31d029576072ccd..0000000000000000000000000000000000000000
--- a/spaces/Cvandi/remake/realesrgan/models/realesrgan_model.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import numpy as np
-import random
-import torch
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.models.srgan_model import SRGANModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-from collections import OrderedDict
-from torch.nn import functional as F
-
-
-@MODEL_REGISTRY.register()
-class RealESRGANModel(SRGANModel):
- """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It mainly performs:
- 1. randomly synthesize LQ images in GPU tensors
- 2. optimize the networks with GAN training.
- """
-
- def __init__(self, opt):
- super(RealESRGANModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
- """It is the training pair pool for increasing the diversity in a batch.
-
- Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
- batch could not have different resize scaling factors. Therefore, we employ this training pair pool
- to increase the degradation diversity in a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt_usm, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size,
- self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue
- self.gt_usm = self.usm_sharpener(self.gt)
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
-
- def optimize_parameters(self, current_iter):
- # usm sharpening
- l1_gt = self.gt_usm
- percep_gt = self.gt_usm
- gan_gt = self.gt_usm
- if self.opt['l1_gt_usm'] is False:
- l1_gt = self.gt
- if self.opt['percep_gt_usm'] is False:
- percep_gt = self.gt
- if self.opt['gan_gt_usm'] is False:
- gan_gt = self.gt
-
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
-
- self.optimizer_g.zero_grad()
- self.output = self.net_g(self.lq)
-
- l_g_total = 0
- loss_dict = OrderedDict()
- if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
- # pixel loss
- if self.cri_pix:
- l_g_pix = self.cri_pix(self.output, l1_gt)
- l_g_total += l_g_pix
- loss_dict['l_g_pix'] = l_g_pix
- # perceptual loss
- if self.cri_perceptual:
- l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt)
- if l_g_percep is not None:
- l_g_total += l_g_percep
- loss_dict['l_g_percep'] = l_g_percep
- if l_g_style is not None:
- l_g_total += l_g_style
- loss_dict['l_g_style'] = l_g_style
- # gan loss
- fake_g_pred = self.net_d(self.output)
- l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
- l_g_total += l_g_gan
- loss_dict['l_g_gan'] = l_g_gan
-
- l_g_total.backward()
- self.optimizer_g.step()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
-
- self.optimizer_d.zero_grad()
- # real
- real_d_pred = self.net_d(gan_gt)
- l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
- loss_dict['l_d_real'] = l_d_real
- loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
- l_d_real.backward()
- # fake
- fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9
- l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
- loss_dict['l_d_fake'] = l_d_fake
- loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
- l_d_fake.backward()
- self.optimizer_d.step()
-
- if self.ema_decay > 0:
- self.model_ema(decay=self.ema_decay)
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
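As a small illustration of the "random resize" step inside feed_data above, here is a standalone sketch of the up/down/keep scale sampling; the probability weights and scale range are placeholder defaults, since the real model reads resize_prob and resize_range from self.opt.

import random
import numpy as np

def sample_resize_scale(resize_prob=(0.2, 0.7, 0.1), resize_range=(0.15, 1.5)):
    updown_type = random.choices(['up', 'down', 'keep'], weights=resize_prob)[0]
    if updown_type == 'up':
        return float(np.random.uniform(1, resize_range[1]))
    if updown_type == 'down':
        return float(np.random.uniform(resize_range[0], 1))
    return 1.0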
diff --git a/spaces/DAMO-NLP-SG/CLEX-Chat/README.md b/spaces/DAMO-NLP-SG/CLEX-Chat/README.md
deleted file mode 100644
index 50ce3572b5fd93d26e6e0c3cf97307833cf00a09..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/CLEX-Chat/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: CLEX Chat
-emoji: 📈
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/tables.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/tables.py
deleted file mode 100644
index 394541b8a4d3355784ef84a04aaaa7501e4dc201..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/merge/tables.py
+++ /dev/null
@@ -1,338 +0,0 @@
-# Copyright 2013 Google, Inc. All Rights Reserved.
-#
-# Google Author(s): Behdad Esfahbod, Roozbeh Pournader
-
-from fontTools import ttLib, cffLib
-from fontTools.misc.psCharStrings import T2WidthExtractor
-from fontTools.ttLib.tables.DefaultTable import DefaultTable
-from fontTools.merge.base import add_method, mergeObjects
-from fontTools.merge.cmap import computeMegaCmap
-from fontTools.merge.util import *
-import logging
-
-
-log = logging.getLogger("fontTools.merge")
-
-
-ttLib.getTableClass("maxp").mergeMap = {
- "*": max,
- "tableTag": equal,
- "tableVersion": equal,
- "numGlyphs": sum,
- "maxStorage": first,
- "maxFunctionDefs": first,
- "maxInstructionDefs": first,
- # TODO When we correctly merge hinting data, update these values:
- # maxFunctionDefs, maxInstructionDefs, maxSizeOfInstructions
-}
-
-headFlagsMergeBitMap = {
- "size": 16,
- "*": bitwise_or,
- 1: bitwise_and, # Baseline at y = 0
- 2: bitwise_and, # lsb at x = 0
- 3: bitwise_and, # Force ppem to integer values. FIXME?
- 5: bitwise_and, # Font is vertical
- 6: lambda bit: 0, # Always set to zero
- 11: bitwise_and, # Font data is 'lossless'
- 13: bitwise_and, # Optimized for ClearType
- 14: bitwise_and, # Last resort font. FIXME? equal or first may be better
- 15: lambda bit: 0, # Always set to zero
-}
-
-ttLib.getTableClass("head").mergeMap = {
- "tableTag": equal,
- "tableVersion": max,
- "fontRevision": max,
- "checkSumAdjustment": lambda lst: 0, # We need *something* here
- "magicNumber": equal,
- "flags": mergeBits(headFlagsMergeBitMap),
- "unitsPerEm": equal,
- "created": current_time,
- "modified": current_time,
- "xMin": min,
- "yMin": min,
- "xMax": max,
- "yMax": max,
- "macStyle": first,
- "lowestRecPPEM": max,
- "fontDirectionHint": lambda lst: 2,
- "indexToLocFormat": first,
- "glyphDataFormat": equal,
-}
-
-ttLib.getTableClass("hhea").mergeMap = {
- "*": equal,
- "tableTag": equal,
- "tableVersion": max,
- "ascent": max,
- "descent": min,
- "lineGap": max,
- "advanceWidthMax": max,
- "minLeftSideBearing": min,
- "minRightSideBearing": min,
- "xMaxExtent": max,
- "caretSlopeRise": first,
- "caretSlopeRun": first,
- "caretOffset": first,
- "numberOfHMetrics": recalculate,
-}
-
-ttLib.getTableClass("vhea").mergeMap = {
- "*": equal,
- "tableTag": equal,
- "tableVersion": max,
- "ascent": max,
- "descent": min,
- "lineGap": max,
- "advanceHeightMax": max,
- "minTopSideBearing": min,
- "minBottomSideBearing": min,
- "yMaxExtent": max,
- "caretSlopeRise": first,
- "caretSlopeRun": first,
- "caretOffset": first,
- "numberOfVMetrics": recalculate,
-}
-
-os2FsTypeMergeBitMap = {
- "size": 16,
- "*": lambda bit: 0,
- 1: bitwise_or, # no embedding permitted
- 2: bitwise_and, # allow previewing and printing documents
- 3: bitwise_and, # allow editing documents
- 8: bitwise_or, # no subsetting permitted
- 9: bitwise_or, # no embedding of outlines permitted
-}
-
-
-def mergeOs2FsType(lst):
- lst = list(lst)
- if all(item == 0 for item in lst):
- return 0
-
- # Compute least restrictive logic for each fsType value
- for i in range(len(lst)):
- # unset bit 1 (no embedding permitted) if either bit 2 or 3 is set
- if lst[i] & 0x000C:
- lst[i] &= ~0x0002
- # set bit 2 (allow previewing) if bit 3 is set (allow editing)
- elif lst[i] & 0x0008:
- lst[i] |= 0x0004
- # set bits 2 and 3 if everything is allowed
- elif lst[i] == 0:
- lst[i] = 0x000C
-
- fsType = mergeBits(os2FsTypeMergeBitMap)(lst)
- # unset bits 2 and 3 if bit 1 is set (some font is "no embedding")
- if fsType & 0x0002:
- fsType &= ~0x000C
- return fsType
-
-
-ttLib.getTableClass("OS/2").mergeMap = {
- "*": first,
- "tableTag": equal,
- "version": max,
- "xAvgCharWidth": first, # Will be recalculated at the end on the merged font
- "fsType": mergeOs2FsType, # Will be overwritten
- "panose": first, # FIXME: should really be the first Latin font
- "ulUnicodeRange1": bitwise_or,
- "ulUnicodeRange2": bitwise_or,
- "ulUnicodeRange3": bitwise_or,
- "ulUnicodeRange4": bitwise_or,
- "fsFirstCharIndex": min,
- "fsLastCharIndex": max,
- "sTypoAscender": max,
- "sTypoDescender": min,
- "sTypoLineGap": max,
- "usWinAscent": max,
- "usWinDescent": max,
- # Version 1
- "ulCodePageRange1": onlyExisting(bitwise_or),
- "ulCodePageRange2": onlyExisting(bitwise_or),
- # Version 2, 3, 4
- "sxHeight": onlyExisting(max),
- "sCapHeight": onlyExisting(max),
- "usDefaultChar": onlyExisting(first),
- "usBreakChar": onlyExisting(first),
- "usMaxContext": onlyExisting(max),
- # version 5
- "usLowerOpticalPointSize": onlyExisting(min),
- "usUpperOpticalPointSize": onlyExisting(max),
-}
-
-
-@add_method(ttLib.getTableClass("OS/2"))
-def merge(self, m, tables):
- DefaultTable.merge(self, m, tables)
- if self.version < 2:
- # bits 8 and 9 are reserved and should be set to zero
- self.fsType &= ~0x0300
- if self.version >= 3:
- # Only one of bits 1, 2, and 3 may be set. We already take
- # care of bit 1 implications in mergeOs2FsType. So unset
- # bit 2 if bit 3 is already set.
- if self.fsType & 0x0008:
- self.fsType &= ~0x0004
- return self
-
-
-ttLib.getTableClass("post").mergeMap = {
- "*": first,
- "tableTag": equal,
- "formatType": max,
- "isFixedPitch": min,
- "minMemType42": max,
- "maxMemType42": lambda lst: 0,
- "minMemType1": max,
- "maxMemType1": lambda lst: 0,
- "mapping": onlyExisting(sumDicts),
- "extraNames": lambda lst: [],
-}
-
-ttLib.getTableClass("vmtx").mergeMap = ttLib.getTableClass("hmtx").mergeMap = {
- "tableTag": equal,
- "metrics": sumDicts,
-}
-
-ttLib.getTableClass("name").mergeMap = {
- "tableTag": equal,
- "names": first, # FIXME? Does mixing name records make sense?
-}
-
-ttLib.getTableClass("loca").mergeMap = {
- "*": recalculate,
- "tableTag": equal,
-}
-
-ttLib.getTableClass("glyf").mergeMap = {
- "tableTag": equal,
- "glyphs": sumDicts,
- "glyphOrder": sumLists,
- "axisTags": equal,
-}
-
-
-@add_method(ttLib.getTableClass("glyf"))
-def merge(self, m, tables):
- for i, table in enumerate(tables):
- for g in table.glyphs.values():
- if i:
- # Drop hints for all but first font, since
- # we don't map functions / CVT values.
- g.removeHinting()
- # Expand composite glyphs to load their
- # composite glyph names.
- if g.isComposite() or g.isVarComposite():
- g.expand(table)
- return DefaultTable.merge(self, m, tables)
-
-
-ttLib.getTableClass("prep").mergeMap = lambda self, lst: first(lst)
-ttLib.getTableClass("fpgm").mergeMap = lambda self, lst: first(lst)
-ttLib.getTableClass("cvt ").mergeMap = lambda self, lst: first(lst)
-ttLib.getTableClass("gasp").mergeMap = lambda self, lst: first(
- lst
-) # FIXME? Appears irreconcilable
-
-
-@add_method(ttLib.getTableClass("CFF "))
-def merge(self, m, tables):
- if any(hasattr(table.cff[0], "FDSelect") for table in tables):
- raise NotImplementedError("Merging CID-keyed CFF tables is not supported yet")
-
- for table in tables:
- table.cff.desubroutinize()
-
- newcff = tables[0]
- newfont = newcff.cff[0]
- private = newfont.Private
- newDefaultWidthX, newNominalWidthX = private.defaultWidthX, private.nominalWidthX
- storedNamesStrings = []
- glyphOrderStrings = []
- glyphOrder = set(newfont.getGlyphOrder())
-
- for name in newfont.strings.strings:
- if name not in glyphOrder:
- storedNamesStrings.append(name)
- else:
- glyphOrderStrings.append(name)
-
- chrset = list(newfont.charset)
- newcs = newfont.CharStrings
- log.debug("FONT 0 CharStrings: %d.", len(newcs))
-
- for i, table in enumerate(tables[1:], start=1):
- font = table.cff[0]
- defaultWidthX, nominalWidthX = (
- font.Private.defaultWidthX,
- font.Private.nominalWidthX,
- )
- widthsDiffer = (
- defaultWidthX != newDefaultWidthX or nominalWidthX != newNominalWidthX
- )
- font.Private = private
- fontGlyphOrder = set(font.getGlyphOrder())
- for name in font.strings.strings:
- if name in fontGlyphOrder:
- glyphOrderStrings.append(name)
- cs = font.CharStrings
- gs = table.cff.GlobalSubrs
- log.debug("Font %d CharStrings: %d.", i, len(cs))
- chrset.extend(font.charset)
- if newcs.charStringsAreIndexed:
- for i, name in enumerate(cs.charStrings, start=len(newcs)):
- newcs.charStrings[name] = i
- newcs.charStringsIndex.items.append(None)
- for name in cs.charStrings:
- if widthsDiffer:
- c = cs[name]
- defaultWidthXToken = object()
- extractor = T2WidthExtractor([], [], nominalWidthX, defaultWidthXToken)
- extractor.execute(c)
- width = extractor.width
- if width is not defaultWidthXToken:
- c.program.pop(0)
- else:
- width = defaultWidthX
- if width != newDefaultWidthX:
- c.program.insert(0, width - newNominalWidthX)
- newcs[name] = cs[name]
-
- newfont.charset = chrset
- newfont.numGlyphs = len(chrset)
- newfont.strings.strings = glyphOrderStrings + storedNamesStrings
-
- return newcff
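-
-# A small worked sketch of the width handling above (illustrative, hypothetical
-# numbers): if a charstring in a later font carried no explicit width, it used
-# that font's defaultWidthX (say 500). When the merged font's defaultWidthX
-# differs (say 600), the loop inserts `width - newNominalWidthX` at the front of
-# the charstring program so the glyph keeps its original 500-unit advance.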
-
-
-@add_method(ttLib.getTableClass("cmap"))
-def merge(self, m, tables):
- # TODO Handle format=14.
- if not hasattr(m, "cmap"):
- computeMegaCmap(m, tables)
- cmap = m.cmap
-
- cmapBmpOnly = {uni: gid for uni, gid in cmap.items() if uni <= 0xFFFF}
- self.tables = []
- module = ttLib.getTableModule("cmap")
- if len(cmapBmpOnly) != len(cmap):
- # format-12 required.
- cmapTable = module.cmap_classes[12](12)
- cmapTable.platformID = 3
- cmapTable.platEncID = 10
- cmapTable.language = 0
- cmapTable.cmap = cmap
- self.tables.append(cmapTable)
- # always create format-4
- cmapTable = module.cmap_classes[4](4)
- cmapTable.platformID = 3
- cmapTable.platEncID = 1
- cmapTable.language = 0
- cmapTable.cmap = cmapBmpOnly
- # ordered by platform then encoding
- self.tables.insert(0, cmapTable)
- self.tableVersion = 0
- self.numSubTables = len(self.tables)
- return self
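-
-# Concrete reading of the merge above (illustrative): if the merged mapping
-# contains both U+0041 and U+1F600, the table ends up with a (3, 1) format-4
-# subtable holding only the BMP entry plus a (3, 10) format-12 subtable holding
-# both; if every codepoint fits in the BMP, only the format-4 subtable is built.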
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_c_m_a_p.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_c_m_a_p.py
deleted file mode 100644
index 6c00aaf63dea48bd96e718809319f3e27c08567e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_c_m_a_p.py
+++ /dev/null
@@ -1,1578 +0,0 @@
-from fontTools.misc.textTools import bytesjoin, safeEval, readHex
-from fontTools.misc.encodingTools import getEncoding
-from fontTools.ttLib import getSearchRange
-from fontTools.unicode import Unicode
-from . import DefaultTable
-import sys
-import struct
-import array
-import logging
-
-
-log = logging.getLogger(__name__)
-
-
-def _make_map(font, chars, gids):
- assert len(chars) == len(gids)
- glyphNames = font.getGlyphNameMany(gids)
- cmap = {}
- for char, gid, name in zip(chars, gids, glyphNames):
- if gid == 0:
- continue
- cmap[char] = name
- return cmap
-
-
-class table__c_m_a_p(DefaultTable.DefaultTable):
- """Character to Glyph Index Mapping Table
-
- This class represents the ``cmap``
- table, which maps between input characters (in Unicode or other system encodings)
- and glyphs within the font. The ``cmap`` table contains one or more subtables
- which determine the mapping of characters to glyphs across different platforms
- and encoding systems.
-
- ``table__c_m_a_p`` objects expose an accessor ``.tables`` which provides access
- to the subtables, although it is normally easier to retrieve individual subtables
- through the utility methods described below. To add new subtables to a font,
- first determine the subtable format (if in doubt use format 4 for glyphs within
- the BMP, format 12 for glyphs outside the BMP, and format 14 for Unicode Variation
- Sequences) construct subtable objects with ``CmapSubtable.newSubtable(format)``,
- and append them to the ``.tables`` list.
-
- Within a subtable, the mapping of characters to glyphs is provided by the ``.cmap``
- attribute.
-
- Example::
-
- cmap4_0_3 = CmapSubtable.newSubtable(4)
- cmap4_0_3.platformID = 0
- cmap4_0_3.platEncID = 3
- cmap4_0_3.language = 0
- cmap4_0_3.cmap = { 0xC1: "Aacute" }
-
- cmap = newTable("cmap")
- cmap.tableVersion = 0
- cmap.tables = [cmap4_0_3]
- """
-
- def getcmap(self, platformID, platEncID):
- """Returns the first subtable which matches the given platform and encoding.
-
- Args:
- platformID (int): The platform ID. Use 0 for Unicode, 1 for Macintosh
- (deprecated for new fonts), 2 for ISO (deprecated) and 3 for Windows.
- platEncID (int): Encoding ID. Interpretation depends on the platform ID.
- See the OpenType specification for details.
-
- Returns:
- An object which is a subclass of :py:class:`CmapSubtable` if a matching
- subtable is found within the font, or ``None`` otherwise.
- """
-
- for subtable in self.tables:
- if subtable.platformID == platformID and subtable.platEncID == platEncID:
- return subtable
- return None # not found
-
- def getBestCmap(
- self,
- cmapPreferences=(
- (3, 10),
- (0, 6),
- (0, 4),
- (3, 1),
- (0, 3),
- (0, 2),
- (0, 1),
- (0, 0),
- ),
- ):
- """Returns the 'best' Unicode cmap dictionary available in the font
- or ``None``, if no Unicode cmap subtable is available.
-
- By default it will search for the following (platformID, platEncID)
- pairs in order::
-
- (3, 10), # Windows Unicode full repertoire
- (0, 6), # Unicode full repertoire (format 13 subtable)
- (0, 4), # Unicode 2.0 full repertoire
- (3, 1), # Windows Unicode BMP
- (0, 3), # Unicode 2.0 BMP
- (0, 2), # Unicode ISO/IEC 10646
- (0, 1), # Unicode 1.1
- (0, 0) # Unicode 1.0
-
- This particular order matches what HarfBuzz uses to choose what
- subtable to use by default. This order prefers the largest-repertoire
- subtable, and among those, prefers the Windows-platform over the
- Unicode-platform as the former has wider support.
-
- This order can be customized via the ``cmapPreferences`` argument.
- """
- for platformID, platEncID in cmapPreferences:
- cmapSubtable = self.getcmap(platformID, platEncID)
- if cmapSubtable is not None:
- return cmapSubtable.cmap
- return None # None of the requested cmap subtables were found
-
- def buildReversed(self):
- """Builds a reverse mapping dictionary
-
- Iterates over all Unicode cmap tables and returns a dictionary mapping
- glyphs to sets of codepoints, such as::
-
- {
- 'one': {0x31}
- 'one': {0x31},
- }
-
- The values are sets of Unicode codepoints because
- some fonts map different codepoints to the same glyph.
- For example, ``U+0041 LATIN CAPITAL LETTER A`` and ``U+0391
- GREEK CAPITAL LETTER ALPHA`` are sometimes the same glyph.
- """
- result = {}
- for subtable in self.tables:
- if subtable.isUnicode():
- for codepoint, name in subtable.cmap.items():
- result.setdefault(name, set()).add(codepoint)
- return result
-
- def decompile(self, data, ttFont):
- tableVersion, numSubTables = struct.unpack(">HH", data[:4])
- self.tableVersion = int(tableVersion)
- self.tables = tables = []
- seenOffsets = {}
- for i in range(numSubTables):
- platformID, platEncID, offset = struct.unpack(
- ">HHl", data[4 + i * 8 : 4 + (i + 1) * 8]
- )
- platformID, platEncID = int(platformID), int(platEncID)
- format, length = struct.unpack(">HH", data[offset : offset + 4])
- if format in [8, 10, 12, 13]:
- format, reserved, length = struct.unpack(
- ">HHL", data[offset : offset + 8]
- )
- elif format in [14]:
- format, length = struct.unpack(">HL", data[offset : offset + 6])
-
- if not length:
- log.error(
- "cmap subtable is reported as having zero length: platformID %s, "
- "platEncID %s, format %s offset %s. Skipping table.",
- platformID,
- platEncID,
- format,
- offset,
- )
- continue
- table = CmapSubtable.newSubtable(format)
- table.platformID = platformID
- table.platEncID = platEncID
- # Note that by default we decompile only the subtable header info;
- # any other data gets decompiled only when an attribute of the
- # subtable is referenced.
- table.decompileHeader(data[offset : offset + int(length)], ttFont)
- if offset in seenOffsets:
- table.data = None # Mark as decompiled
- table.cmap = tables[seenOffsets[offset]].cmap
- else:
- seenOffsets[offset] = i
- tables.append(table)
- if ttFont.lazy is False: # Be lazy for None and True
- self.ensureDecompiled()
-
- def ensureDecompiled(self, recurse=False):
- # The recurse argument is unused, but part of the signature of
- # ensureDecompiled across the library.
- for st in self.tables:
- st.ensureDecompiled()
-
- def compile(self, ttFont):
- self.tables.sort() # sort according to the spec; see CmapSubtable.__lt__()
- numSubTables = len(self.tables)
- totalOffset = 4 + 8 * numSubTables
- data = struct.pack(">HH", self.tableVersion, numSubTables)
- tableData = b""
- seen = (
- {}
- ) # Some tables are the same object reference. Don't compile them twice.
- done = (
- {}
- ) # Some tables are different objects, but compile to the same data chunk
- for table in self.tables:
- offset = seen.get(id(table.cmap))
- if offset is None:
- chunk = table.compile(ttFont)
- offset = done.get(chunk)
- if offset is None:
- offset = seen[id(table.cmap)] = done[chunk] = totalOffset + len(
- tableData
- )
- tableData = tableData + chunk
- data = data + struct.pack(">HHl", table.platformID, table.platEncID, offset)
- return data + tableData
-
- def toXML(self, writer, ttFont):
- writer.simpletag("tableVersion", version=self.tableVersion)
- writer.newline()
- for table in self.tables:
- table.toXML(writer, ttFont)
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "tableVersion":
- self.tableVersion = safeEval(attrs["version"])
- return
- if name[:12] != "cmap_format_":
- return
- if not hasattr(self, "tables"):
- self.tables = []
- format = safeEval(name[12:])
- table = CmapSubtable.newSubtable(format)
- table.platformID = safeEval(attrs["platformID"])
- table.platEncID = safeEval(attrs["platEncID"])
- table.fromXML(name, attrs, content, ttFont)
- self.tables.append(table)
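-
-# Typical usage sketch (illustrative, not part of the original module; the font
-# file name below is hypothetical):
-#
-#     from fontTools.ttLib import TTFont
-#
-#     font = TTFont("SomeFont.ttf")          # hypothetical path
-#     best = font["cmap"].getBestCmap()      # {codepoint: glyph name} or None
-#     reverse = font["cmap"].buildReversed() # {glyph name: {codepoints, ...}}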
-
-
-class CmapSubtable(object):
- """Base class for all cmap subtable formats.
-
- Subclasses which handle the individual subtable formats are named
- ``cmap_format_0``, ``cmap_format_2`` etc. Use :py:meth:`getSubtableClass`
- to retrieve the concrete subclass, or :py:meth:`newSubtable` to get a
- new subtable object for a given format.
-
- The object exposes a ``.cmap`` attribute, which contains a dictionary mapping
- character codepoints to glyph names.
- """
-
- @staticmethod
- def getSubtableClass(format):
- """Return the subtable class for a format."""
- return cmap_classes.get(format, cmap_format_unknown)
-
- @staticmethod
- def newSubtable(format):
- """Return a new instance of a subtable for the given format."""
- subtableClass = CmapSubtable.getSubtableClass(format)
- return subtableClass(format)
-
- def __init__(self, format):
- self.format = format
- self.data = None
- self.ttFont = None
- self.platformID = None #: The platform ID of this subtable
- self.platEncID = None #: The encoding ID of this subtable (interpretation depends on ``platformID``)
- self.language = (
- None #: The language ID of this subtable (Macintosh platform only)
- )
-
- def ensureDecompiled(self, recurse=False):
- # The recurse argument is unused, but part of the signature of
- # ensureDecompiled across the library.
- if self.data is None:
- return
- self.decompile(None, None) # use saved data.
- self.data = None # Once this table has been decompiled, make sure we don't
- # just return the original data. Also avoids recursion when
- # called with an attribute that the cmap subtable doesn't have.
-
- def __getattr__(self, attr):
- # allow lazy decompilation of subtables.
- if attr[:2] == "__": # don't handle requests for member functions like '__lt__'
- raise AttributeError(attr)
- if self.data is None:
- raise AttributeError(attr)
- self.ensureDecompiled()
- return getattr(self, attr)
-
- def decompileHeader(self, data, ttFont):
- format, length, language = struct.unpack(">HHH", data[:6])
- assert (
- len(data) == length
- ), "corrupt cmap table format %d (data length: %d, header length: %d)" % (
- format,
- len(data),
- length,
- )
- self.format = int(format)
- self.length = int(length)
- self.language = int(language)
- self.data = data[6:]
- self.ttFont = ttFont
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ("language", self.language),
- ],
- )
- writer.newline()
- codes = sorted(self.cmap.items())
- self._writeCodes(codes, writer)
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def getEncoding(self, default=None):
- """Returns the Python encoding name for this cmap subtable based on its platformID,
- platEncID, and language. If encoding for these values is not known, by default
- ``None`` is returned. That can be overridden by passing a value to the ``default``
- argument.
-
- Note that if you want to choose a "preferred" cmap subtable, most of the time
- ``self.isUnicode()`` is what you want as that one only returns true for the modern,
- commonly used, Unicode-compatible triplets, not the legacy ones.
- """
- return getEncoding(self.platformID, self.platEncID, self.language, default)
-
- def isUnicode(self):
- """Returns true if the characters are interpreted as Unicode codepoints."""
- return self.platformID == 0 or (
- self.platformID == 3 and self.platEncID in [0, 1, 10]
- )
-
- def isSymbol(self):
- """Returns true if the subtable is for the Symbol encoding (3,0)"""
- return self.platformID == 3 and self.platEncID == 0
-
- def _writeCodes(self, codes, writer):
- isUnicode = self.isUnicode()
- for code, name in codes:
- writer.simpletag("map", code=hex(code), name=name)
- if isUnicode:
- writer.comment(Unicode[code])
- writer.newline()
-
- def __lt__(self, other):
- if not isinstance(other, CmapSubtable):
- return NotImplemented
-
- # implemented so that list.sort() sorts according to the spec.
- selfTuple = (
- getattr(self, "platformID", None),
- getattr(self, "platEncID", None),
- getattr(self, "language", None),
- self.__dict__,
- )
- otherTuple = (
- getattr(other, "platformID", None),
- getattr(other, "platEncID", None),
- getattr(other, "language", None),
- other.__dict__,
- )
- return selfTuple < otherTuple
-
-
-class cmap_format_0(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- assert 262 == self.length, "Format 0 cmap subtable not 262 bytes"
- gids = array.array("B")
- gids.frombytes(self.data)
- charCodes = list(range(len(gids)))
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return struct.pack(">HHH", 0, 262, self.language) + self.data
-
- cmap = self.cmap
- assert set(cmap.keys()).issubset(range(256))
- getGlyphID = ttFont.getGlyphID
- valueList = [getGlyphID(cmap[i]) if i in cmap else 0 for i in range(256)]
-
- gids = array.array("B", valueList)
- data = struct.pack(">HHH", 0, 262, self.language) + gids.tobytes()
- assert len(data) == 262
- return data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
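-
-# Note: format 0 is the fixed-size "byte encoding" table: a 6-byte header plus
-# 256 one-byte glyph IDs (hence the 262-byte checks above), so it can only map
-# character codes 0-255 to glyph IDs 0-255.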
-
-
-subHeaderFormat = ">HHhH"
-
-
-class SubHeader(object):
- def __init__(self):
- self.firstCode = None
- self.entryCount = None
- self.idDelta = None
- self.idRangeOffset = None
- self.glyphIndexArray = []
-
-
-class cmap_format_2(CmapSubtable):
- def setIDDelta(self, subHeader):
- subHeader.idDelta = 0
- # find the minGI which is not zero.
- minGI = subHeader.glyphIndexArray[0]
- for gid in subHeader.glyphIndexArray:
- if (gid != 0) and (gid < minGI):
- minGI = gid
- # The lowest gid in glyphIndexArray, after subtracting idDelta, must be 1.
- # idDelta is a short, and must be between -32K and 32K. minGI can be between 1 and 64K.
- # We would like to pick an idDelta such that the first glyphArray GID is 1,
- # so that we are more likely to be able to combine glypharray GID subranges.
- # This means that we have a problem when minGI is > 32K
- # Since the final gi is reconstructed from the glyphArray GID by:
- # (short)finalGID = (gid + idDelta) % 0x10000,
- # we can get from a glypharray GID of 1 to a final GID of 65K by subtracting 2, and casting the
- # negative number to an unsigned short.
-
- if minGI > 1:
- if minGI > 0x7FFF:
- subHeader.idDelta = -(0x10000 - minGI) - 1
- else:
- subHeader.idDelta = minGI - 1
- idDelta = subHeader.idDelta
- for i in range(subHeader.entryCount):
- gid = subHeader.glyphIndexArray[i]
- if gid > 0:
- subHeader.glyphIndexArray[i] = gid - idDelta
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- subHeaderKeys = []
- maxSubHeaderindex = 0
- # get the key array, and determine the number of subHeaders.
- allKeys = array.array("H")
- allKeys.frombytes(data[:512])
- data = data[512:]
- if sys.byteorder != "big":
- allKeys.byteswap()
- subHeaderKeys = [key // 8 for key in allKeys]
- maxSubHeaderindex = max(subHeaderKeys)
-
- # Load subHeaders
- subHeaderList = []
- pos = 0
- for i in range(maxSubHeaderindex + 1):
- subHeader = SubHeader()
- (
- subHeader.firstCode,
- subHeader.entryCount,
- subHeader.idDelta,
- subHeader.idRangeOffset,
- ) = struct.unpack(subHeaderFormat, data[pos : pos + 8])
- pos += 8
- giDataPos = pos + subHeader.idRangeOffset - 2
- giList = array.array("H")
- giList.frombytes(data[giDataPos : giDataPos + subHeader.entryCount * 2])
- if sys.byteorder != "big":
- giList.byteswap()
- subHeader.glyphIndexArray = giList
- subHeaderList.append(subHeader)
- # How this gets processed.
- # Charcodes may be one or two bytes.
- # The first byte of a charcode is mapped through the subHeaderKeys, to select
- # a subHeader. For any subheader but 0, the next byte is then mapped through the
- # selected subheader. If subheader Index 0 is selected, then the byte itself is
- # mapped through the subheader, and there is no second byte.
- # Then assume that the subsequent byte is the first byte of the next charcode, and repeat.
- #
- # Each subheader references a range in the glyphIndexArray whose length is entryCount.
- # The range in glyphIndexArray referenced by a subheader may overlap with the range in glyphIndexArray
- # referenced by another subheader.
- # The only subheader that will be referenced by more than one first-byte value is the subheader
- # that maps the entire range of glyphID values to glyphIndex 0, e.g. notdef:
- # {firstChar 0, EntryCount 0, idDelta 0, idRangeOffset xx}
- # A byte being mapped through a subheader is treated as an index into a mapping of array index to font glyphIndex.
- # A subheader specifies a subrange within (0...256) by the
- # firstChar and EntryCount values. If the byte value is outside the subrange, then the glyphIndex is zero
- # (e.g. glyph not in font).
- # If the byte index is in the subrange, then an offset index is calculated as (byteIndex - firstChar).
- # The index to glyphIndex mapping is a subrange of the glyphIndexArray. You find the start of the subrange by
- # counting idRangeOffset bytes from the idRangeOffset word. The first value in this subrange is the
- # glyphIndex for the index firstChar. The offset index should then be used in this array to get the glyphIndex.
- # Example for Logocut-Medium
- # first byte of charcode = 129; selects subheader 1.
- # subheader 1 = {firstChar 64, EntryCount 108,idDelta 42,idRangeOffset 0252}
- # second byte of charCode = 66
- # the index offset = 66-64 = 2.
- # The subrange of the glyphIndexArray starting at 0x0252 bytes from the idRangeOffset word is:
- # [glyphIndexArray index], [subrange array index] = glyphIndex
- # [256], [0]=1 from charcode [129, 64]
- # [257], [1]=2 from charcode [129, 65]
- # [258], [2]=3 from charcode [129, 66]
- # [259], [3]=4 from charcode [129, 67]
- # So, the glyphIndex = 3 from the array. Then if idDelta is not zero and the glyph ID is not zero,
- # add it to the glyphID to get the final glyphIndex
- # value. In this case the final glyphIndex = 3 + 42 -> 45. Whew!
-
- self.data = b""
- cmap = {}
- notdefGI = 0
- for firstByte in range(256):
- subHeadindex = subHeaderKeys[firstByte]
- subHeader = subHeaderList[subHeadindex]
- if subHeadindex == 0:
- if (firstByte < subHeader.firstCode) or (
- firstByte >= subHeader.firstCode + subHeader.entryCount
- ):
- continue # gi is notdef.
- else:
- charCode = firstByte
- offsetIndex = firstByte - subHeader.firstCode
- gi = subHeader.glyphIndexArray[offsetIndex]
- if gi != 0:
- gi = (gi + subHeader.idDelta) % 0x10000
- else:
- continue # gi is notdef.
- cmap[charCode] = gi
- else:
- if subHeader.entryCount:
- charCodeOffset = firstByte * 256 + subHeader.firstCode
- for offsetIndex in range(subHeader.entryCount):
- charCode = charCodeOffset + offsetIndex
- gi = subHeader.glyphIndexArray[offsetIndex]
- if gi != 0:
- gi = (gi + subHeader.idDelta) % 0x10000
- else:
- continue
- cmap[charCode] = gi
- # If not subHeader.entryCount, then all char codes with this first byte are
- # mapped to .notdef. We can skip this subtable, and leave the glyphs un-encoded, which is the
- # same as mapping it to .notdef.
-
- gids = list(cmap.values())
- charCodes = list(cmap.keys())
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
- kEmptyTwoCharCodeRange = -1
- notdefGI = 0
-
- items = sorted(self.cmap.items())
- charCodes = [item[0] for item in items]
- names = [item[1] for item in items]
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 2 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
-
- # Process the (char code to gid) item list in char code order.
- # By definition, all one byte char codes map to subheader 0.
- # For all the two byte char codes, we assume that the first byte maps to the empty subhead (with an entry count of 0,
- # which defines all char codes in its range to map to notdef) unless proven otherwise.
- # Note that since the char code items are processed in char code order, all the char codes with the
- # same first byte are in sequential order.
-
- subHeaderKeys = [
- kEmptyTwoCharCodeRange for x in range(256)
- ] # list of indices into subHeaderList.
- subHeaderList = []
-
- # We force this subheader entry 0 to exist in the subHeaderList in the case where someone comes up
- # with a cmap where all the one byte char codes map to notdef,
- # with the result that the subhead 0 would not get created just by processing the item list.
- charCode = charCodes[0]
- if charCode > 255:
- subHeader = SubHeader()
- subHeader.firstCode = 0
- subHeader.entryCount = 0
- subHeader.idDelta = 0
- subHeader.idRangeOffset = 0
- subHeaderList.append(subHeader)
-
- lastFirstByte = -1
- items = zip(charCodes, gids)
- for charCode, gid in items:
- if gid == 0:
- continue
- firstbyte = charCode >> 8
- secondByte = charCode & 0x00FF
-
- if (
- firstbyte != lastFirstByte
- ): # Need to update the current subhead, and start a new one.
- if lastFirstByte > -1:
- # fix GI's and iDelta of current subheader.
- self.setIDDelta(subHeader)
-
- # If it was subheader 0 for one-byte charCodes, then we need to set the subHeaderKeys value to zero
- # for the indices matching the char codes.
- if lastFirstByte == 0:
- for index in range(subHeader.entryCount):
- charCode = subHeader.firstCode + index
- subHeaderKeys[charCode] = 0
-
- assert subHeader.entryCount == len(
- subHeader.glyphIndexArray
- ), "Error - subhead entry count does not match len of glyphID subrange."
- # init new subheader
- subHeader = SubHeader()
- subHeader.firstCode = secondByte
- subHeader.entryCount = 1
- subHeader.glyphIndexArray.append(gid)
- subHeaderList.append(subHeader)
- subHeaderKeys[firstbyte] = len(subHeaderList) - 1
- lastFirstByte = firstbyte
- else:
- # need to fill in with notdefs all the code points between the last charCode and the current charCode.
- codeDiff = secondByte - (subHeader.firstCode + subHeader.entryCount)
- for i in range(codeDiff):
- subHeader.glyphIndexArray.append(notdefGI)
- subHeader.glyphIndexArray.append(gid)
- subHeader.entryCount = subHeader.entryCount + codeDiff + 1
-
- # fix GI's and iDelta of last subheader that we added to the subheader array.
- self.setIDDelta(subHeader)
-
- # Now we add a final subheader for the subHeaderKeys which maps to empty two byte charcode ranges.
- subHeader = SubHeader()
- subHeader.firstCode = 0
- subHeader.entryCount = 0
- subHeader.idDelta = 0
- subHeader.idRangeOffset = 2
- subHeaderList.append(subHeader)
- emptySubheadIndex = len(subHeaderList) - 1
- for index in range(256):
- if subHeaderKeys[index] == kEmptyTwoCharCodeRange:
- subHeaderKeys[index] = emptySubheadIndex
- # Since this is the last subheader, the GlyphIndex Array starts two bytes after the start of the
- # idRangeOffset word of this subHeader. We can safely point to the first entry in the GlyphIndexArray,
- # since the first subrange of the GlyphIndexArray is for subHeader 0, which always starts with
- # charcode 0 and GID 0.
-
- idRangeOffset = (
- len(subHeaderList) - 1
- ) * 8 + 2 # offset to beginning of glyphIDArray from first subheader idRangeOffset.
- subheadRangeLen = (
- len(subHeaderList) - 1
- ) # skip last special empty-set subheader; we've already hardcoded its idRangeOffset to 2.
- for index in range(subheadRangeLen):
- subHeader = subHeaderList[index]
- subHeader.idRangeOffset = 0
- for j in range(index):
- prevSubhead = subHeaderList[j]
- if (
- prevSubhead.glyphIndexArray == subHeader.glyphIndexArray
- ): # use the glyphIndexArray subarray
- subHeader.idRangeOffset = (
- prevSubhead.idRangeOffset - (index - j) * 8
- )
- subHeader.glyphIndexArray = []
- break
- if subHeader.idRangeOffset == 0: # didn't find one.
- subHeader.idRangeOffset = idRangeOffset
- idRangeOffset = (
- idRangeOffset - 8
- ) + subHeader.entryCount * 2 # one less subheader, one more subArray.
- else:
- idRangeOffset = idRangeOffset - 8 # one less subheader
-
- # Now we can write out the data!
- length = (
- 6 + 512 + 8 * len(subHeaderList)
- ) # header, 256 subHeaderKeys, and subheader array.
- for subhead in subHeaderList[:-1]:
- length = (
- length + len(subhead.glyphIndexArray) * 2
- ) # We can't use subhead.entryCount, as some of the subheads may share subArrays.
- dataList = [struct.pack(">HHH", 2, length, self.language)]
- for index in subHeaderKeys:
- dataList.append(struct.pack(">H", index * 8))
- for subhead in subHeaderList:
- dataList.append(
- struct.pack(
- subHeaderFormat,
- subhead.firstCode,
- subhead.entryCount,
- subhead.idDelta,
- subhead.idRangeOffset,
- )
- )
- for subhead in subHeaderList[:-1]:
- for gi in subhead.glyphIndexArray:
- dataList.append(struct.pack(">H", gi))
- data = bytesjoin(dataList)
- assert len(data) == length, (
- "Error: cmap format 2 is not same length as calculated! actual: "
- + str(len(data))
- + " calc : "
- + str(length)
- )
- return data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-cmap_format_4_format = ">7H"
-
-# uint16 endCode[segCount] # Ending character code for each segment, last = 0xFFFF.
-# uint16 reservedPad # This value should be zero
-# uint16 startCode[segCount] # Starting character code for each segment
-# uint16 idDelta[segCount] # Delta for all character codes in segment
-# uint16 idRangeOffset[segCount] # Offset in bytes to glyph indexArray, or 0
-# uint16 glyphIndexArray[variable] # Glyph index array
-
-
-def splitRange(startCode, endCode, cmap):
- # Try to split a range of character codes into subranges with consecutive
- # glyph IDs in such a way that the cmap4 subtable can be stored "most"
- # efficiently. I can't prove I've got the optimal solution, but it seems
- # to do well with the fonts I tested: none became bigger, many became smaller.
- if startCode == endCode:
- return [], [endCode]
-
- lastID = cmap[startCode]
- lastCode = startCode
- inOrder = None
- orderedBegin = None
- subRanges = []
-
- # Gather subranges in which the glyph IDs are consecutive.
- for code in range(startCode + 1, endCode + 1):
- glyphID = cmap[code]
-
- if glyphID - 1 == lastID:
- if inOrder is None or not inOrder:
- inOrder = 1
- orderedBegin = lastCode
- else:
- if inOrder:
- inOrder = 0
- subRanges.append((orderedBegin, lastCode))
- orderedBegin = None
-
- lastID = glyphID
- lastCode = code
-
- if inOrder:
- subRanges.append((orderedBegin, lastCode))
- assert lastCode == endCode
-
- # Now filter out those new subranges that would only make the data bigger.
- # A new segment costs 8 bytes, not using a new segment costs 2 bytes per
- # character.
- newRanges = []
- for b, e in subRanges:
- if b == startCode and e == endCode:
- break # the whole range, we're fine
- if b == startCode or e == endCode:
- threshold = 4 # split costs one more segment
- else:
- threshold = 8 # split costs two more segments
- if (e - b + 1) > threshold:
- newRanges.append((b, e))
- subRanges = newRanges
-
- if not subRanges:
- return [], [endCode]
-
- if subRanges[0][0] != startCode:
- subRanges.insert(0, (startCode, subRanges[0][0] - 1))
- if subRanges[-1][1] != endCode:
- subRanges.append((subRanges[-1][1] + 1, endCode))
-
- # Fill the "holes" in the segments list -- those are the segments in which
- # the glyph IDs are _not_ consecutive.
- i = 1
- while i < len(subRanges):
- if subRanges[i - 1][1] + 1 != subRanges[i][0]:
- subRanges.insert(i, (subRanges[i - 1][1] + 1, subRanges[i][0] - 1))
- i = i + 1
- i = i + 1
-
- # Transform the ranges into startCode/endCode lists.
- start = []
- end = []
- for b, e in subRanges:
- start.append(b)
- end.append(e)
- start.pop(0)
-
- assert len(start) + 1 == len(end)
- return start, end
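-
-# A worked sketch of splitRange() (illustrative, hypothetical values): with
-# cmap = {0x41: 10, 0x42: 11, 0x43: 12, 0x44: 13, 0x45: 14} the glyph IDs form a
-# single consecutive run, so splitRange(0x41, 0x45, cmap) returns ([], [0x45])
-# and the caller keeps the range as one format-4 segment; only runs longer than
-# the 4/8-character thresholds above are split out into their own segments.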
-
-
-class cmap_format_4(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- (segCountX2, searchRange, entrySelector, rangeShift) = struct.unpack(
- ">4H", data[:8]
- )
- data = data[8:]
- segCount = segCountX2 // 2
-
- allCodes = array.array("H")
- allCodes.frombytes(data)
- self.data = data = None
-
- if sys.byteorder != "big":
- allCodes.byteswap()
-
- # divide the data
- endCode = allCodes[:segCount]
- allCodes = allCodes[segCount + 1 :] # the +1 is skipping the reservedPad field
- startCode = allCodes[:segCount]
- allCodes = allCodes[segCount:]
- idDelta = allCodes[:segCount]
- allCodes = allCodes[segCount:]
- idRangeOffset = allCodes[:segCount]
- glyphIndexArray = allCodes[segCount:]
- lenGIArray = len(glyphIndexArray)
-
- # build 2-byte character mapping
- charCodes = []
- gids = []
- for i in range(len(startCode) - 1): # don't do 0xffff!
- start = startCode[i]
- delta = idDelta[i]
- rangeOffset = idRangeOffset[i]
- partial = rangeOffset // 2 - start + i - len(idRangeOffset)
-
- rangeCharCodes = list(range(startCode[i], endCode[i] + 1))
- charCodes.extend(rangeCharCodes)
- if rangeOffset == 0:
- gids.extend(
- [(charCode + delta) & 0xFFFF for charCode in rangeCharCodes]
- )
- else:
- for charCode in rangeCharCodes:
- index = charCode + partial
- assert index < lenGIArray, (
- "In format 4 cmap, range (%d), the calculated index (%d) into the glyph index array is not less than the length of the array (%d) !"
- % (i, index, lenGIArray)
- )
- if glyphIndexArray[index] != 0: # if not missing glyph
- glyphID = glyphIndexArray[index] + delta
- else:
- glyphID = 0 # missing glyph
- gids.append(glyphID & 0xFFFF)
-
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
-
- charCodes = list(self.cmap.keys())
- if not charCodes:
- startCode = [0xFFFF]
- endCode = [0xFFFF]
- else:
- charCodes.sort()
- names = [self.cmap[code] for code in charCodes]
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 4 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
- cmap = {} # code:glyphID mapping
- for code, gid in zip(charCodes, gids):
- cmap[code] = gid
-
- # Build startCode and endCode lists.
- # Split the char codes in ranges of consecutive char codes, then split
- # each range in more ranges of consecutive/not consecutive glyph IDs.
- # See splitRange().
- lastCode = charCodes[0]
- endCode = []
- startCode = [lastCode]
- for charCode in charCodes[
- 1:
- ]: # skip the first code, it's the first start code
- if charCode == lastCode + 1:
- lastCode = charCode
- continue
- start, end = splitRange(startCode[-1], lastCode, cmap)
- startCode.extend(start)
- endCode.extend(end)
- startCode.append(charCode)
- lastCode = charCode
- start, end = splitRange(startCode[-1], lastCode, cmap)
- startCode.extend(start)
- endCode.extend(end)
- startCode.append(0xFFFF)
- endCode.append(0xFFFF)
-
- # build up rest of cruft
- idDelta = []
- idRangeOffset = []
- glyphIndexArray = []
- for i in range(len(endCode) - 1): # skip the closing codes (0xffff)
- indices = []
- for charCode in range(startCode[i], endCode[i] + 1):
- indices.append(cmap[charCode])
- if indices == list(range(indices[0], indices[0] + len(indices))):
- idDelta.append((indices[0] - startCode[i]) % 0x10000)
- idRangeOffset.append(0)
- else:
- idDelta.append(0)
- idRangeOffset.append(2 * (len(endCode) + len(glyphIndexArray) - i))
- glyphIndexArray.extend(indices)
- idDelta.append(1) # 0xffff + 1 == (tadaa!) 0. So this end code maps to .notdef
- idRangeOffset.append(0)
-
- # Insane.
- segCount = len(endCode)
- segCountX2 = segCount * 2
- searchRange, entrySelector, rangeShift = getSearchRange(segCount, 2)
-
- charCodeArray = array.array("H", endCode + [0] + startCode)
- idDeltaArray = array.array("H", idDelta)
- restArray = array.array("H", idRangeOffset + glyphIndexArray)
- if sys.byteorder != "big":
- charCodeArray.byteswap()
- if sys.byteorder != "big":
- idDeltaArray.byteswap()
- if sys.byteorder != "big":
- restArray.byteswap()
- data = charCodeArray.tobytes() + idDeltaArray.tobytes() + restArray.tobytes()
-
- length = struct.calcsize(cmap_format_4_format) + len(data)
- header = struct.pack(
- cmap_format_4_format,
- self.format,
- length,
- self.language,
- segCountX2,
- searchRange,
- entrySelector,
- rangeShift,
- )
- return header + data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- nameMap, attrsMap, dummyContent = element
- if nameMap != "map":
- assert 0, "Unrecognized keyword in cmap subtable"
- cmap[safeEval(attrsMap["code"])] = attrsMap["name"]
-
-
-class cmap_format_6(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- firstCode, entryCount = struct.unpack(">HH", data[:4])
- firstCode = int(firstCode)
- data = data[4:]
- # assert len(data) == 2 * entryCount # XXX not true in Apple's Helvetica!!!
- gids = array.array("H")
- gids.frombytes(data[: 2 * int(entryCount)])
- if sys.byteorder != "big":
- gids.byteswap()
- self.data = data = None
-
- charCodes = list(range(firstCode, firstCode + len(gids)))
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
- cmap = self.cmap
- codes = sorted(cmap.keys())
- if codes: # yes, there are empty cmap tables.
- codes = list(range(codes[0], codes[-1] + 1))
- firstCode = codes[0]
- valueList = [
- ttFont.getGlyphID(cmap[code]) if code in cmap else 0 for code in codes
- ]
- gids = array.array("H", valueList)
- if sys.byteorder != "big":
- gids.byteswap()
- data = gids.tobytes()
- else:
- data = b""
- firstCode = 0
- header = struct.pack(
- ">HHHHH", 6, len(data) + 10, self.language, firstCode, len(codes)
- )
- return header + data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
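-
-# Note: format 6 is a dense "trimmed table mapping": a firstCode plus one
-# contiguous run of glyph IDs, so compile() above fills any gaps in the code
-# range with glyph 0 rather than splitting the range.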
-
-
-class cmap_format_12_or_13(CmapSubtable):
- def __init__(self, format):
- self.format = format
- self.reserved = 0
- self.data = None
- self.ttFont = None
-
- def decompileHeader(self, data, ttFont):
- format, reserved, length, language, nGroups = struct.unpack(">HHLLL", data[:16])
- assert (
- len(data) == (16 + nGroups * 12) == (length)
- ), "corrupt cmap table format %d (data length: %d, header length: %d)" % (
- self.format,
- len(data),
- length,
- )
- self.format = format
- self.reserved = reserved
- self.length = length
- self.language = language
- self.nGroups = nGroups
- self.data = data[16:]
- self.ttFont = ttFont
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- charCodes = []
- gids = []
- pos = 0
- for i in range(self.nGroups):
- startCharCode, endCharCode, glyphID = struct.unpack(
- ">LLL", data[pos : pos + 12]
- )
- pos += 12
- lenGroup = 1 + endCharCode - startCharCode
- charCodes.extend(list(range(startCharCode, endCharCode + 1)))
- gids.extend(self._computeGIDs(glyphID, lenGroup))
- self.data = data = None
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(
- ">HHLLL",
- self.format,
- self.reserved,
- self.length,
- self.language,
- self.nGroups,
- )
- + self.data
- )
- charCodes = list(self.cmap.keys())
- names = list(self.cmap.values())
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 12 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
-
- cmap = {} # code:glyphID mapping
- for code, gid in zip(charCodes, gids):
- cmap[code] = gid
-
- charCodes.sort()
- index = 0
- startCharCode = charCodes[0]
- startGlyphID = cmap[startCharCode]
- lastGlyphID = startGlyphID - self._format_step
- lastCharCode = startCharCode - 1
- nGroups = 0
- dataList = []
- maxIndex = len(charCodes)
- for index in range(maxIndex):
- charCode = charCodes[index]
- glyphID = cmap[charCode]
- if not self._IsInSameRun(glyphID, lastGlyphID, charCode, lastCharCode):
- dataList.append(
- struct.pack(">LLL", startCharCode, lastCharCode, startGlyphID)
- )
- startCharCode = charCode
- startGlyphID = glyphID
- nGroups = nGroups + 1
- lastGlyphID = glyphID
- lastCharCode = charCode
- dataList.append(struct.pack(">LLL", startCharCode, lastCharCode, startGlyphID))
- nGroups = nGroups + 1
- data = bytesjoin(dataList)
- lengthSubtable = len(data) + 16
- assert len(data) == (nGroups * 12) == (lengthSubtable - 16)
- return (
- struct.pack(
- ">HHLLL",
- self.format,
- self.reserved,
- lengthSubtable,
- self.language,
- nGroups,
- )
- + data
- )
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ("format", self.format),
- ("reserved", self.reserved),
- ("length", self.length),
- ("language", self.language),
- ("nGroups", self.nGroups),
- ],
- )
- writer.newline()
- codes = sorted(self.cmap.items())
- self._writeCodes(codes, writer)
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.format = safeEval(attrs["format"])
- self.reserved = safeEval(attrs["reserved"])
- self.length = safeEval(attrs["length"])
- self.language = safeEval(attrs["language"])
- self.nGroups = safeEval(attrs["nGroups"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-class cmap_format_12(cmap_format_12_or_13):
-
- _format_step = 1
-
- def __init__(self, format=12):
- cmap_format_12_or_13.__init__(self, format)
-
- def _computeGIDs(self, startingGlyph, numberOfGlyphs):
- return list(range(startingGlyph, startingGlyph + numberOfGlyphs))
-
- def _IsInSameRun(self, glyphID, lastGlyphID, charCode, lastCharCode):
- return (glyphID == 1 + lastGlyphID) and (charCode == 1 + lastCharCode)
-
-
-class cmap_format_13(cmap_format_12_or_13):
-
- _format_step = 0
-
- def __init__(self, format=13):
- cmap_format_12_or_13.__init__(self, format)
-
- def _computeGIDs(self, startingGlyph, numberOfGlyphs):
- return [startingGlyph] * numberOfGlyphs
-
- def _IsInSameRun(self, glyphID, lastGlyphID, charCode, lastCharCode):
- return (glyphID == lastGlyphID) and (charCode == 1 + lastCharCode)
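-
-# Note: the only difference between the two subclasses above is the grouping
-# rule: format 12 groups runs where both the character codes and the glyph IDs
-# increase by one, while format 13 ("many-to-one range mappings") maps every
-# character code in a group to the same single glyph ID.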
-
-
-def cvtToUVS(threeByteString):
- data = b"\0" + threeByteString
- (val,) = struct.unpack(">L", data)
- return val
-
-
-def cvtFromUVS(val):
- assert 0 <= val < 0x1000000
- fourByteString = struct.pack(">L", val)
- return fourByteString[1:]
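-
-# Worked example of the 24-bit packing above (illustrative): cvtFromUVS(0xFE00)
-# yields b"\x00\xfe\x00" and cvtToUVS(b"\x00\xfe\x00") gives back 0xFE00, the
-# codepoint of VARIATION SELECTOR-1.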
-
-
-class cmap_format_14(CmapSubtable):
- def decompileHeader(self, data, ttFont):
- format, length, numVarSelectorRecords = struct.unpack(">HLL", data[:10])
- self.data = data[10:]
- self.length = length
- self.numVarSelectorRecords = numVarSelectorRecords
- self.ttFont = ttFont
- self.language = 0xFF # has no language.
-
- def decompile(self, data, ttFont):
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
- data = self.data
-
- self.cmap = (
- {}
- ) # so that clients that expect this to exist in a cmap table won't fail.
- uvsDict = {}
- recOffset = 0
- for n in range(self.numVarSelectorRecords):
- uvs, defOVSOffset, nonDefUVSOffset = struct.unpack(
- ">3sLL", data[recOffset : recOffset + 11]
- )
- recOffset += 11
- varUVS = cvtToUVS(uvs)
- if defOVSOffset:
- startOffset = defOVSOffset - 10
- (numValues,) = struct.unpack(">L", data[startOffset : startOffset + 4])
- startOffset += 4
- for r in range(numValues):
- uv, addtlCnt = struct.unpack(
- ">3sB", data[startOffset : startOffset + 4]
- )
- startOffset += 4
- firstBaseUV = cvtToUVS(uv)
- cnt = addtlCnt + 1
- baseUVList = list(range(firstBaseUV, firstBaseUV + cnt))
- glyphList = [None] * cnt
- localUVList = zip(baseUVList, glyphList)
- try:
- uvsDict[varUVS].extend(localUVList)
- except KeyError:
- uvsDict[varUVS] = list(localUVList)
-
- if nonDefUVSOffset:
- startOffset = nonDefUVSOffset - 10
- (numRecs,) = struct.unpack(">L", data[startOffset : startOffset + 4])
- startOffset += 4
- localUVList = []
- for r in range(numRecs):
- uv, gid = struct.unpack(">3sH", data[startOffset : startOffset + 5])
- startOffset += 5
- uv = cvtToUVS(uv)
- glyphName = self.ttFont.getGlyphName(gid)
- localUVList.append((uv, glyphName))
- try:
- uvsDict[varUVS].extend(localUVList)
- except KeyError:
- uvsDict[varUVS] = localUVList
-
- self.uvsDict = uvsDict
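- # uvsDict maps each variation selector codepoint to a list of
- # (base codepoint, glyph name) pairs; a glyph name of None marks a
- # Default UVS entry, i.e. the base character's regular cmap glyph is used.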
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ],
- )
- writer.newline()
- uvsDict = self.uvsDict
- uvsList = sorted(uvsDict.keys())
- for uvs in uvsList:
- uvList = uvsDict[uvs]
- uvList.sort(key=lambda item: (item[1] is not None, item[0], item[1]))
- for uv, gname in uvList:
- attrs = [("uv", hex(uv)), ("uvs", hex(uvs))]
- if gname is not None:
- attrs.append(("name", gname))
- writer.simpletag("map", attrs)
- writer.newline()
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = 0xFF # provide a value so that CmapSubtable.__lt__() won't fail
- if not hasattr(self, "cmap"):
- self.cmap = (
- {}
- ) # so that clients that expect this to exist in a cmap table won't fail.
- if not hasattr(self, "uvsDict"):
- self.uvsDict = {}
- uvsDict = self.uvsDict
-
- # For backwards compatibility reasons we accept "None" as an indicator
- # for "default mapping", unless the font actually has a glyph named
- # "None".
- _hasGlyphNamedNone = None
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- uvs = safeEval(attrs["uvs"])
- uv = safeEval(attrs["uv"])
- gname = attrs.get("name")
- if gname == "None":
- if _hasGlyphNamedNone is None:
- _hasGlyphNamedNone = "None" in ttFont.getGlyphOrder()
- if not _hasGlyphNamedNone:
- gname = None
- try:
- uvsDict[uvs].append((uv, gname))
- except KeyError:
- uvsDict[uvs] = [(uv, gname)]
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(
- ">HLL", self.format, self.length, self.numVarSelectorRecords
- )
- + self.data
- )
-
- uvsDict = self.uvsDict
- uvsList = sorted(uvsDict.keys())
- self.numVarSelectorRecords = len(uvsList)
- offset = (
- 10 + self.numVarSelectorRecords * 11
- ) # current value is end of VarSelectorRecords block.
- data = []
- varSelectorRecords = []
- for uvs in uvsList:
- entryList = uvsDict[uvs]
-
- defList = [entry for entry in entryList if entry[1] is None]
- if defList:
- defList = [entry[0] for entry in defList]
- defOVSOffset = offset
- defList.sort()
-
- lastUV = defList[0]
- cnt = -1
- defRecs = []
- for defEntry in defList:
- cnt += 1
- if (lastUV + cnt) != defEntry:
- rec = struct.pack(">3sB", cvtFromUVS(lastUV), cnt - 1)
- lastUV = defEntry
- defRecs.append(rec)
- cnt = 0
-
- rec = struct.pack(">3sB", cvtFromUVS(lastUV), cnt)
- defRecs.append(rec)
-
- numDefRecs = len(defRecs)
- data.append(struct.pack(">L", numDefRecs))
- data.extend(defRecs)
- offset += 4 + numDefRecs * 4
- else:
- defOVSOffset = 0
-
- ndefList = [entry for entry in entryList if entry[1] is not None]
- if ndefList:
- nonDefUVSOffset = offset
- ndefList.sort()
- numNonDefRecs = len(ndefList)
- data.append(struct.pack(">L", numNonDefRecs))
- offset += 4 + numNonDefRecs * 5
-
- for uv, gname in ndefList:
- gid = ttFont.getGlyphID(gname)
- ndrec = struct.pack(">3sH", cvtFromUVS(uv), gid)
- data.append(ndrec)
- else:
- nonDefUVSOffset = 0
-
- vrec = struct.pack(">3sLL", cvtFromUVS(uvs), defOVSOffset, nonDefUVSOffset)
- varSelectorRecords.append(vrec)
-
- data = bytesjoin(varSelectorRecords) + bytesjoin(data)
- self.length = 10 + len(data)
- headerdata = struct.pack(
- ">HLL", self.format, self.length, self.numVarSelectorRecords
- )
-
- return headerdata + data
-
-
-class cmap_format_unknown(CmapSubtable):
- def toXML(self, writer, ttFont):
- cmapName = self.__class__.__name__[:12] + str(self.format)
- writer.begintag(
- cmapName,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ],
- )
- writer.newline()
- writer.dumphex(self.data)
- writer.endtag(cmapName)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.data = readHex(content)
- self.cmap = {}
-
- def decompileHeader(self, data, ttFont):
- self.language = 0 # dummy value
- self.data = data
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- def compile(self, ttFont):
- if self.data:
- return self.data
- else:
- return None
-
-
-cmap_classes = {
- 0: cmap_format_0,
- 2: cmap_format_2,
- 4: cmap_format_4,
- 6: cmap_format_6,
- 12: cmap_format_12,
- 13: cmap_format_13,
- 14: cmap_format_14,
-}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-e4d3547f.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-e4d3547f.js
deleted file mode 100644
index 0897a859bb0bc312b3cb111456e3b8e76bbe6c17..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-e4d3547f.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as z,e as E,s as K,a9 as L,N as p,P,O as B,K as q,L as k,U as j,p as w,M as v,Q,R,ab as M,ac as N,ad as O,z as g,v as b,A,k as C,o as h,x as S,E as T,ae as U,q as D,r as F}from"./index-1d65707a.js";import{B as G}from"./Button-f155035a.js";import{C as H}from"./Column-6c43afc7.js";/* empty css */function I(a){let e,l,t,s,o,u,n,f,d,_;const r=a[3].default,c=L(r,a,a[2],null);return{c(){e=p("div"),l=p("span"),t=P(a[1]),s=B(),o=p("span"),o.textContent="▼",u=B(),n=p("div"),c&&c.c(),q(l,"class","svelte-s1r2yt"),q(o,"class","icon svelte-s1r2yt"),k(o,"transform",a[0]?"rotate(0)":"rotate(90deg)"),q(e,"class","label-wrap svelte-s1r2yt"),j(e,"open",a[0]),k(n,"display",a[0]?"block":"none")},m(i,m){w(i,e,m),v(e,l),v(l,t),v(e,s),v(e,o),w(i,u,m),w(i,n,m),c&&c.m(n,null),f=!0,d||(_=Q(e,"click",a[4]),d=!0)},p(i,[m]){(!f||m&2)&&R(t,i[1]),m&1&&k(o,"transform",i[0]?"rotate(0)":"rotate(90deg)"),(!f||m&1)&&j(e,"open",i[0]),c&&c.p&&(!f||m&4)&&M(c,r,i,i[2],f?O(r,i[2],m,null):N(i[2]),null),m&1&&k(n,"display",i[0]?"block":"none")},i(i){f||(g(c,i),f=!0)},o(i){b(c,i),f=!1},d(i){i&&(A(e),A(u),A(n)),c&&c.d(i),d=!1,_()}}}function J(a,e,l){let{$$slots:t={},$$scope:s}=e,{label:o=""}=e,{open:u=!0}=e;const n=()=>l(0,u=!u);return a.$$set=f=>{"label"in f&&l(1,o=f.label),"open"in f&&l(0,u=f.open),"$$scope"in f&&l(2,s=f.$$scope)},[u,o,s,t,n]}class V extends z{constructor(e){super(),E(this,e,J,I,K,{label:1,open:0})}}function W(a){let e;const l=a[6].default,t=L(l,a,a[7],null);return{c(){t&&t.c()},m(s,o){t&&t.m(s,o),e=!0},p(s,o){t&&t.p&&(!e||o&128)&&M(t,l,s,s[7],e?O(l,s[7],o,null):N(s[7]),null)},i(s){e||(g(t,s),e=!0)},o(s){b(t,s),e=!1},d(s){t&&t.d(s)}}}function X(a){let e,l;return e=new H({props:{$$slots:{default:[W]},$$scope:{ctx:a}}}),{c(){C(e.$$.fragment)},m(t,s){h(e,t,s),l=!0},p(t,s){const o={};s&128&&(o.$$scope={dirty:s,ctx:t}),e.$set(o)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){b(e.$$.fragment,t),l=!1},d(t){S(e,t)}}}function Y(a){let e,l,t,s;const o=[a[5]];let u={};for(let n=0;n{"label"in r&&l(0,o=r.label),"elem_id"in r&&l(1,u=r.elem_id),"elem_classes"in r&&l(2,n=r.elem_classes),"visible"in r&&l(3,f=r.visible),"open"in r&&l(4,d=r.open),"loading_status"in r&&l(5,_=r.loading_status),"$$scope"in r&&l(7,s=r.$$scope)},[o,u,n,f,d,_,t,s]}class y extends z{constructor(e){super(),E(this,e,$,Z,K,{label:0,elem_id:1,elem_classes:2,visible:3,open:4,loading_status:5})}}const le=y,ne=["static"];export{le as Component,ne as modes};
-//# sourceMappingURL=index-e4d3547f.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-55cc7e8b.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-55cc7e8b.js
deleted file mode 100644
index 404dba37de683e1e2e14cc7ad6ab5922deb12278..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-55cc7e8b.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as P,e as Q,s as R,G as J,k as B,O as G,N as q,K as k,o as O,p as w,z as S,v as N,A as C,x as T,V as Y,B as Z,am as y,P as V,R as H,U as j,M as v,Q as U,a1 as p,E as x,ae as $,h as z,j as D,q as ee,r as le,t as F,F as E}from"./index-3370be2a.js";/* empty css */import{B as te}from"./Button-89624748.js";import{B as ne}from"./BlockTitle-bcf8c05e.js";import"./Info-5611e10f.js";function K(l,e,n){const t=l.slice();return t[13]=e[n],t}function ie(l){let e;return{c(){e=V(l[3])},m(n,t){w(n,e,t)},p(n,t){t&8&&H(e,n[3])},d(n){n&&C(e)}}}function M(l){let e,n,t,f,c,u=l[13]+"",i,h,b,d;function m(){return l[10](l[13])}function s(..._){return l[11](l[13],..._)}return{c(){e=q("label"),n=q("input"),f=G(),c=q("span"),i=V(u),h=G(),n.disabled=l[2],n.checked=t=l[0].includes(l[13]),k(n,"type","checkbox"),k(n,"name","test"),k(n,"class","svelte-1qxcj04"),k(c,"class","ml-2 svelte-1qxcj04"),k(e,"class","svelte-1qxcj04"),j(e,"disabled",l[2]),j(e,"selected",l[0].includes(l[13]))},m(_,r){w(_,e,r),v(e,n),v(e,f),v(e,c),v(c,i),v(e,h),b||(d=[U(n,"change",m),U(n,"input",s)],b=!0)},p(_,r){l=_,r&4&&(n.disabled=l[2]),r&3&&t!==(t=l[0].includes(l[13]))&&(n.checked=t),r&2&&u!==(u=l[13]+"")&&H(i,u),r&4&&j(e,"disabled",l[2]),r&3&&j(e,"selected",l[0].includes(l[13]))},d(_){_&&C(e),b=!1,p(d)}}}function se(l){let e,n,t,f;e=new ne({props:{show_label:l[5],info:l[4],$$slots:{default:[ie]},$$scope:{ctx:l}}});let c=J(l[1]),u=[];for(let i=0;i{t.includes(o)?t.splice(t.indexOf(o),1):t.push(o),n(0,t)};function _(){m("change",t),c||m("input")}y(()=>{n(8,c=!1)});const r=o=>s(o),g=(o,A)=>m("select",{index:u.indexOf(o),value:o,selected:A.currentTarget.checked});return l.$$set=o=>{"value"in o&&n(0,t=o.value),"value_is_output"in o&&n(8,c=o.value_is_output),"choices"in o&&n(1,u=o.choices),"disabled"in o&&n(2,i=o.disabled),"label"in o&&n(3,h=o.label),"info"in o&&n(4,b=o.info),"show_label"in o&&n(5,d=o.show_label)},l.$$.update=()=>{l.$$.dirty&513&&JSON.stringify(t)!==JSON.stringify(f)&&(n(9,f=t.slice()),_())},[t,u,i,h,b,d,m,s,c,f,r,g]}class ue extends P{constructor(e){super(),Q(this,e,ae,se,R,{value:0,value_is_output:8,choices:1,disabled:2,label:3,info:4,show_label:5})}}function ce(l){let e,n,t,f,c,u;const i=[l[13]];let h={};for(let s=0;sD(t,"value",b)),z.push(()=>D(t,"value_is_output",d)),t.$on("select",l[16]),t.$on("change",l[17]),t.$on("input",l[18]),{c(){B(e.$$.fragment),n=G(),B(t.$$.fragment)},m(s,_){O(e,s,_),w(s,n,_),O(t,s,_),u=!0},p(s,_){const r=_&8192?ee(i,[le(s[13])]):{};e.$set(r);const g={};_&32&&(g.choices=s[5]),_&1024&&(g.label=s[10]),_&2048&&(g.info=s[11]),_&4096&&(g.show_label=s[12]),_&512&&(g.disabled=s[9]==="static"),!f&&_&1&&(f=!0,g.value=s[0],F(()=>f=!1)),!c&&_&2&&(c=!0,g.value_is_output=s[1],F(()=>c=!1)),t.$set(g)},i(s){u||(S(e.$$.fragment,s),S(t.$$.fragment,s),u=!0)},o(s){N(e.$$.fragment,s),N(t.$$.fragment,s),u=!1},d(s){s&&C(n),T(e,s),T(t,s)}}}function fe(l){let e,n;return e=new te({props:{visible:l[4],elem_id:l[2],elem_classes:l[3],type:"fieldset",container:l[6],scale:l[7],min_width:l[8],$$slots:{default:[ce]},$$scope:{ctx:l}}}),{c(){B(e.$$.fragment)},m(t,f){O(e,t,f),n=!0},p(t,[f]){const c={};f&16&&(c.visible=t[4]),f&4&&(c.elem_id=t[2]),f&8&&(c.elem_classes=t[3]),f&64&&(c.container=t[6]),f&128&&(c.scale=t[7]),f&256&&(c.min_width=t[8]),f&540195&&(c.$$scope={dirty:f,ctx:t}),e.$set(c)},i(t){n||(S(e.$$.fragment,t),n=!0)},o(t){N(e.$$.fragment,t),n=!1},d(t){T(e,t)}}}function 
oe(l,e,n){let{elem_id:t=""}=e,{elem_classes:f=[]}=e,{visible:c=!0}=e,{value:u=[]}=e,{value_is_output:i=!1}=e,{choices:h}=e,{container:b=!0}=e,{scale:d=null}=e,{min_width:m=void 0}=e,{mode:s}=e,{label:_="Checkbox Group"}=e,{info:r=void 0}=e,{show_label:g}=e,{loading_status:o}=e;function A(a){u=a,n(0,u)}function I(a){i=a,n(1,i)}function L(a){E.call(this,l,a)}function W(a){E.call(this,l,a)}function X(a){E.call(this,l,a)}return l.$$set=a=>{"elem_id"in a&&n(2,t=a.elem_id),"elem_classes"in a&&n(3,f=a.elem_classes),"visible"in a&&n(4,c=a.visible),"value"in a&&n(0,u=a.value),"value_is_output"in a&&n(1,i=a.value_is_output),"choices"in a&&n(5,h=a.choices),"container"in a&&n(6,b=a.container),"scale"in a&&n(7,d=a.scale),"min_width"in a&&n(8,m=a.min_width),"mode"in a&&n(9,s=a.mode),"label"in a&&n(10,_=a.label),"info"in a&&n(11,r=a.info),"show_label"in a&&n(12,g=a.show_label),"loading_status"in a&&n(13,o=a.loading_status)},[u,i,t,f,c,h,b,d,m,s,_,r,g,o,A,I,L,W,X]}class _e extends P{constructor(e){super(),Q(this,e,oe,fe,R,{elem_id:2,elem_classes:3,visible:4,value:0,value_is_output:1,choices:5,container:6,scale:7,min_width:8,mode:9,label:10,info:11,show_label:12,loading_status:13})}}const ge=_e,ke=["static","dynamic"],ve=l=>({type:{payload:"Array"},description:{payload:"list of selected choices"},example_data:l.choices.length?[l.choices[0]]:[]});export{ge as Component,ve as document,ke as modes};
-//# sourceMappingURL=index-55cc7e8b.js.map
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-a0f79f16.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-a0f79f16.js
deleted file mode 100644
index 3db4efc380c2adcf6bc1662d1ccc2700c268533a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-a0f79f16.js
+++ /dev/null
@@ -1,3 +0,0 @@
-import{S as F,e as G,s as J,J as Z,K as _,p as k,M as w,n as A,A as v,N as T,B as oe,av as P,G as N,O as C,V as K,P as I,R as q,Q as S,a1 as ue,U as j,L,k as O,o as V,z as H,v as M,x as z,E as _e,ae as me,m as de,q as ge,r as he,u as Q,y as U,F as be}from"./index-3370be2a.js";import{g as ke}from"./color-baaf9df5.js";import{B as ve}from"./Button-89624748.js";import{B as pe}from"./BlockLabel-56db415e.js";import{E as we}from"./Empty-585389a4.js";function ye(t){let e,n,l;return{c(){e=Z("svg"),n=Z("path"),l=Z("path"),_(n,"fill","currentColor"),_(n,"d","M12 15H5a3 3 0 0 1-3-3v-2a3 3 0 0 1 3-3h5V5a1 1 0 0 0-1-1H3V2h6a3 3 0 0 1 3 3zM5 9a1 1 0 0 0-1 1v2a1 1 0 0 0 1 1h5V9zm15 14v2a1 1 0 0 0 1 1h5v-4h-5a1 1 0 0 0-1 1z"),_(l,"fill","currentColor"),_(l,"d","M2 30h28V2Zm26-2h-7a3 3 0 0 1-3-3v-2a3 3 0 0 1 3-3h5v-2a1 1 0 0 0-1-1h-6v-2h6a3 3 0 0 1 3 3Z"),_(e,"xmlns","http://www.w3.org/2000/svg"),_(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),_(e,"aria-hidden","true"),_(e,"role","img"),_(e,"class","iconify iconify--carbon"),_(e,"width","100%"),_(e,"height","100%"),_(e,"preserveAspectRatio","xMidYMid meet"),_(e,"viewBox","0 0 32 32")},m(s,o){k(s,e,o),w(e,n),w(e,l)},p:A,i:A,o:A,d(s){s&&v(e)}}}class ae extends F{constructor(e){super(),G(this,e,null,ye,J,{})}}function Y(t,e,n){const l=t.slice();l[18]=e[n][0],l[24]=e[n][1];const s=typeof l[24]=="string"?parseInt(l[24]):l[24];return l[25]=s,l}function W(t,e,n){const l=t.slice();return l[18]=e[n][0],l[19]=e[n][1],l[21]=n,l}function X(t,e,n){const l=t.slice();return l[19]=e[n][0],l[22]=e[n][1],l[21]=n,l}function He(t){let e,n,l=t[1]&&x(),s=N(t[0]),o=[];for(let a=0;a-1 0 +1 ",_(e,"class","color-legend svelte-19on2m6"),_(e,"data-testid","highlighted-text:color-legend")},m(n,l){k(n,e,l)},d(n){n&&v(e)}}}function $(t){let e,n,l=t[18]+"",s,o,a;return{c(){e=T("span"),n=T("span"),s=I(l),o=C(),_(n,"class","text svelte-19on2m6"),_(e,"class","textspan score-text svelte-19on2m6"),_(e,"style",a="background-color: rgba("+(t[25]<0?"128, 90, 213,"+-t[25]:"239, 68, 60,"+t[25])+")")},m(r,i){k(r,e,i),w(e,n),w(n,s),w(e,o)},p(r,i){i&1&&l!==(l=r[18]+"")&&q(s,l),i&1&&a!==(a="background-color: rgba("+(r[25]<0?"128, 90, 213,"+-r[25]:"239, 68, 60,"+r[25])+")")&&_(e,"style",a)},d(r){r&&v(e)}}}function ee(t){let e,n=N(Object.entries(t[3])),l=[];for(let s=0;sc(h),E=h=>c(h),m=()=>g(),ie=()=>g(),re=(h,p,y)=>{d("select",{index:h,value:[p,y]})};return t.$$set=h=>{"value"in h&&n(0,s=h.value),"show_legend"in h&&n(1,o=h.show_legend),"color_map"in h&&n(9,a=h.color_map),"selectable"in h&&n(2,r=h.selectable)},t.$$.update=()=>{if(t.$$.dirty&513){let h=function(){for(const p in a){const y=a[p].trim();y in P?n(3,f[p]=P[y],f):n(3,f[p]={primary:l?b(a[p],1):a[p],secondary:l?b(a[p],.5):a[p]},f)}};if(a||n(9,a={}),s.length>0){for(let[p,y]of s)if(y!==null)if(typeof y=="string"){if(n(5,B="categories"),!(y in a)){let D=ke(Object.keys(a).length);n(9,a[y]=D,a)}}else n(5,B="scores")}h()}},[s,o,r,f,u,B,d,c,g,a,R,E,m,ie,re]}class je extends F{constructor(e){super(),G(this,e,Be,Me,J,{value:0,show_legend:1,color_map:9,selectable:2})}}function se(t){let e,n;return e=new pe({props:{Icon:ae,label:t[6],float:!1,disable:t[7]===!1}}),{c(){O(e.$$.fragment)},m(l,s){V(e,l,s),n=!0},p(l,s){const o={};s&64&&(o.label=l[6]),s&128&&(o.disable=l[7]===!1),e.$set(o)},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){M(e.$$.fragment,l),n=!1},d(l){z(e,l)}}}function Ce(t){let e,n;return e=new we({props:{$$slots:{default:[Ne]},$$scope:{ctx:t}}}),{c(){O(e.$$.fragment)},m(l,s){V(e,l,s),n=!0},p(l,s){const 
o={};s&32768&&(o.$$scope={dirty:s,ctx:l}),e.$set(o)},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){M(e.$$.fragment,l),n=!1},d(l){z(e,l)}}}function Ee(t){let e,n;return e=new je({props:{selectable:t[10],value:t[4],show_legend:t[5],color_map:t[0]}}),e.$on("select",t[13]),{c(){O(e.$$.fragment)},m(l,s){V(e,l,s),n=!0},p(l,s){const o={};s&1024&&(o.selectable=l[10]),s&16&&(o.value=l[4]),s&32&&(o.show_legend=l[5]),s&1&&(o.color_map=l[0]),e.$set(o)},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){M(e.$$.fragment,l),n=!1},d(l){z(e,l)}}}function Ne(t){let e,n;return e=new ae({}),{c(){O(e.$$.fragment)},m(l,s){V(e,l,s),n=!0},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){M(e.$$.fragment,l),n=!1},d(l){z(e,l)}}}function Oe(t){let e,n,l,s,o,a,r;const i=[t[11]];let f={};for(let c=0;c{u=null}),U());let E=s;s=B(c),s===E?d[s].p(c,g):(Q(),M(d[E],1,1,()=>{d[E]=null}),U(),o=d[s],o?o.p(c,g):(o=d[s]=b[s](c),o.c()),H(o,1),o.m(a.parentNode,a))},i(c){r||(H(e.$$.fragment,c),H(u),H(o),r=!0)},o(c){M(e.$$.fragment,c),M(u),M(o),r=!1},d(c){c&&(v(n),v(l),v(a)),z(e,c),u&&u.d(c),d[s].d(c)}}}function Ve(t){let e,n;return e=new ve({props:{test_id:"highlighted-text",visible:t[3],elem_id:t[1],elem_classes:t[2],padding:!1,container:t[7],scale:t[8],min_width:t[9],$$slots:{default:[Oe]},$$scope:{ctx:t}}}),{c(){O(e.$$.fragment)},m(l,s){V(e,l,s),n=!0},p(l,[s]){const o={};s&8&&(o.visible=l[3]),s&2&&(o.elem_id=l[1]),s&4&&(o.elem_classes=l[2]),s&128&&(o.container=l[7]),s&256&&(o.scale=l[8]),s&512&&(o.min_width=l[9]),s&36081&&(o.$$scope={dirty:s,ctx:l}),e.$set(o)},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){M(e.$$.fragment,l),n=!1},d(l){z(e,l)}}}function ze(t,e,n){let{elem_id:l=""}=e,{elem_classes:s=[]}=e,{visible:o=!0}=e,{value:a}=e,r,{show_legend:i}=e,{color_map:f={}}=e,{label:u="Highlighted Text"}=e,{container:b=!0}=e,{scale:d=null}=e,{min_width:B=void 0}=e,{selectable:c=!1}=e,{loading_status:g}=e;const R=oe();function E(m){be.call(this,t,m)}return t.$$set=m=>{"elem_id"in m&&n(1,l=m.elem_id),"elem_classes"in m&&n(2,s=m.elem_classes),"visible"in m&&n(3,o=m.visible),"value"in m&&n(4,a=m.value),"show_legend"in m&&n(5,i=m.show_legend),"color_map"in m&&n(0,f=m.color_map),"label"in m&&n(6,u=m.label),"container"in m&&n(7,b=m.container),"scale"in m&&n(8,d=m.scale),"min_width"in m&&n(9,B=m.min_width),"selectable"in m&&n(10,c=m.selectable),"loading_status"in m&&n(11,g=m.loading_status)},t.$$.update=()=>{t.$$.dirty&1&&!f&&Object.keys(f).length&&n(0,f),t.$$.dirty&4112&&a!==r&&(n(12,r=a),R("change"))},[f,l,s,o,a,i,u,b,d,B,c,g,r,E]}class Re extends F{constructor(e){super(),G(this,e,ze,Ve,J,{elem_id:1,elem_classes:2,visible:3,value:4,show_legend:5,color_map:0,label:6,container:7,scale:8,min_width:9,selectable:10,loading_status:11})}}const De=Re,Ze=["static"],Fe=t=>({type:{payload:"Array<[string, string | number]>"},description:{payload:"list of text spans and corresponding label / value"}});export{De as Component,Fe as document,Ze as modes};
-//# sourceMappingURL=index-a0f79f16.js.map
diff --git a/spaces/DaleChen/AutoGPT/autogpt/js/overlay.js b/spaces/DaleChen/AutoGPT/autogpt/js/overlay.js
deleted file mode 100644
index 1c99c72673330b8ea8cf037ef889233f2d4326be..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/js/overlay.js
+++ /dev/null
@@ -1,29 +0,0 @@
-const overlay = document.createElement('div');
-Object.assign(overlay.style, {
- position: 'fixed',
- zIndex: 999999,
- top: 0,
- left: 0,
- width: '100%',
- height: '100%',
- background: 'rgba(0, 0, 0, 0.7)',
- color: '#fff',
- fontSize: '24px',
- fontWeight: 'bold',
- display: 'flex',
- justifyContent: 'center',
- alignItems: 'center',
-});
-const textContent = document.createElement('div');
-Object.assign(textContent.style, {
- textAlign: 'center',
-});
-textContent.textContent = 'AutoGPT Analyzing Page';
-overlay.appendChild(textContent);
-document.body.append(overlay);
-document.body.style.overflow = 'hidden';
-let dotCount = 0;
-setInterval(() => {
- textContent.textContent = 'AutoGPT Analyzing Page' + '.'.repeat(dotCount);
- dotCount = (dotCount + 1) % 4;
-}, 1000);
diff --git a/spaces/Danil/AnyNameHack/README.md b/spaces/Danil/AnyNameHack/README.md
deleted file mode 100644
index e84214a4bffacff8b215606f55ed02bbd31f533d..0000000000000000000000000000000000000000
--- a/spaces/Danil/AnyNameHack/README.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: AnyNameHack
-emoji: 🔥
-colorFrom: purple
-colorTo: gray
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-
-`emoji`: _string_
-
-Space emoji (emoji-only character allowed)
-
-
-`colorFrom`: _string_
-
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-
-`colorTo`: _string_
-
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-
-`sdk`: _string_
-
-Can be either `gradio` or `streamlit`
-
-
-`sdk_version` : _string_
-
-Only applicable for `streamlit` SDK.
-
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-
-`app_file`: _string_
-
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-
-Path is relative to the root of the repository.
-
-
-`pinned`: _boolean_
-
-Whether the Space stays on top of your list.
\ No newline at end of file
diff --git a/spaces/DebasishDhal99/Youtube_Playlist/average_duration.py b/spaces/DebasishDhal99/Youtube_Playlist/average_duration.py
deleted file mode 100644
index 38ad5961683453fae2ab3983af2c005006410943..0000000000000000000000000000000000000000
--- a/spaces/DebasishDhal99/Youtube_Playlist/average_duration.py
+++ /dev/null
@@ -1,157 +0,0 @@
-#import pyyoutube
-from datetime import timedelta
-#from pyyoutube import playlist
-import re
-import gradio as gr
-from urllib.parse import urlparse, parse_qs
-from contextlib import suppress
-import os
-
-api_key = os.getenv("api_key_secret")
-import googleapiclient
-from googleapiclient.discovery import build
-from googleapiclient.errors import HttpError
-import datetime
-youtube = build('youtube', 'v3', developerKey=api_key)
-
-def playlist_average_duration_func(youtubelink,videoid=False):
-
- def playlist_exist_check(playlistlink):
-
- def extract_playlist_id(playlistlink):
- match = re.search(r'list=([^&]+)', playlistlink) #It searches for the string 'list=' followed by >=1 characters that are not '&'.
- if match:
- return match.group(1)
- return None
-
- playlist_id = extract_playlist_id(playlistlink)
-
- if playlist_id is None:
- return False
-
- search_request = youtube.playlists().list(
-
- part='id',
- id=playlist_id,
- maxResults=1
- )
-
- search_response = search_request.execute()
- if 'items' in search_response:
- try:
- playlistdict = search_response['items'][0]
- print("ID of playlist is:- ",playlistdict['id'])
- return playlistdict['id']
- except:
- #print("Video not found.")
- return False
-
- playlistid = playlist_exist_check(youtubelink)
- if playlistid == False or playlistid==None:
- print("Playlist doesn't exist")
- return False
- print("1st check passed - Playlist link is valid")
-
-
-
-
-#This section retrieves the video ids of all the videos in the playlist, and stores them in a list. 50 in one iteration.
-
- vid_ids = []
- next_page_token = None
- while True:
-
-
- pl_request = youtube.playlistItems().list(
- part="contentDetails,snippet",
- playlistId=playlistid,
-            maxResults=50, # This is the max number of videos that can be fetched in one go from a playlist, as YouTube Data API v3 results are paginated
- pageToken=next_page_token
- )
- pl_response = pl_request.execute()
- # print("Reponse obtained from youtube")
-
-
-
- for item in pl_response['items']:
- vid_id = item['contentDetails']['videoId']
- vid_ids.append(vid_id)
- if videoid==True:
- print(item['contentDetails']['videoId'])
-
- next_page_token = pl_response.get("nextPageToken")
- if not next_page_token:
- break
- print("2nd check passed - Playlist read")
-
-
-
-#This section obtains the playlist name from the playlist id
- pl_request = youtube.playlists().list(
- part="snippet",
- id=playlistid,
- maxResults=1
- )
- pl_response = pl_request.execute()
- playlist = pl_response['items'][0]
- title = playlist['snippet']['title']
- print("Playlist Title:", title)
-
-
-
-
-
-
- # title = playlist['snippet']['title']
- # print("Playlist Title:", title)
-#This section retrieves the duration of each video in the playlist, and stores them in a list. 50 in one iteration
-
-
- iterations = len(vid_ids)//50+1
- duration_list = []
- for i in range(iterations):
- start_index = i * 50
- end_index = (i + 1) * 50
- batch_ids = vid_ids[start_index:end_index]
- vid_request = youtube.videos().list(
- part="contentDetails",
- id=','.join(batch_ids)
- )
-
- vid_response = vid_request.execute()
-
-
- for item in vid_response['items']:
- duration = item['contentDetails']['duration']
- duration = duration[2:]
- hours = 0
- minutes = 0
- seconds = 0
-
- if "H" in duration:
- hours_index = duration.index("H")
- hours = int(duration[:hours_index])
- duration = duration[hours_index+1:]
-
- if "M" in duration:
- minutes_index = duration.index("M")
- minutes = int(duration[:minutes_index])
- duration = duration[minutes_index+1:]
-
- if "S" in duration:
- seconds_index = duration.index("S")
- seconds = int(duration[:seconds_index])
-
- duration = timedelta(hours=hours, minutes=minutes, seconds=seconds)
- duration_list.append(duration)
- print("3rd check passed - Individual video duration calculated")
- total_duration = sum(duration_list, timedelta())
- #Find the average duration of each video in the playlist
- average_duration = total_duration/len(vid_ids)
- print("Total duration of playlist is:- ",total_duration)
- print("Total no. of videos is = ",len(vid_ids))
- print("Average duration of each video is:- ",average_duration)
-    # Convert the average duration into HH:MM:SS format
- # average_duration_format = s
- return str(average_duration)
-
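The deleted helper above parses the ISO 8601 durations returned by the YouTube Data API (PT#H#M#S) by slicing the string field by field. A regex-based sketch of the same idea, illustrative only and assuming the plain PT#H#M#S form with no day component; the helper name is hypothetical:

import re
from datetime import timedelta

_ISO8601_DURATION = re.compile(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?")

def parse_iso8601_duration(duration):
    # "PT1H2M3S" -> timedelta(hours=1, minutes=2, seconds=3)
    match = _ISO8601_DURATION.fullmatch(duration)
    if not match:
        return timedelta()
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

# Average over a batch of API duration strings, mirroring the loop above.
durations = [parse_iso8601_duration(d) for d in ("PT4M13S", "PT1H2M", "PT45S")]
average_duration = sum(durations, timedelta()) / len(durations)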
diff --git a/spaces/Docfile/open_llm_leaderboard/src/rate_limiting.py b/spaces/Docfile/open_llm_leaderboard/src/rate_limiting.py
deleted file mode 100644
index f07af954c87268c53e1ef787971d1e14df3a3f7f..0000000000000000000000000000000000000000
--- a/spaces/Docfile/open_llm_leaderboard/src/rate_limiting.py
+++ /dev/null
@@ -1,16 +0,0 @@
-
-from datetime import datetime, timezone, timedelta
-
-
-def user_submission_permission(submission_name, users_to_submission_dates, rate_limit_period):
- org_or_user, _ = submission_name.split("/")
- if org_or_user not in users_to_submission_dates:
- return 0
- submission_dates = sorted(users_to_submission_dates[org_or_user])
-
- time_limit = (datetime.now(timezone.utc) - timedelta(days=rate_limit_period)).strftime("%Y-%m-%dT%H:%M:%SZ")
- submissions_after_timelimit = [d for d in submission_dates if d > time_limit]
-
- return len(submissions_after_timelimit)
-
-
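A minimal sketch of how the deleted user_submission_permission helper could gate a new submission; the quota value and the submission-dates mapping are illustrative assumptions, not part of the original module:

# Assumes user_submission_permission (defined above) is in scope.
QUOTA = 5                # hypothetical: max submissions per rolling window
RATE_LIMIT_PERIOD = 7    # days, matching the rate_limit_period argument above

users_to_submission_dates = {
    "my-org": ["2023-08-01T10:00:00Z", "2023-08-03T09:30:00Z"],
}

num_recent = user_submission_permission(
    "my-org/my-model", users_to_submission_dates, RATE_LIMIT_PERIOD
)
if num_recent >= QUOTA:
    print(f"Rate limit reached: {num_recent} submissions in the last {RATE_LIMIT_PERIOD} days")
else:
    print("Submission allowed")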
diff --git a/spaces/ECCV2022/ECCV2022_papers/style.css b/spaces/ECCV2022/ECCV2022_papers/style.css
deleted file mode 100644
index e2b871457d13980ddfbbc35bf5da02a75ece292e..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/ECCV2022_papers/style.css
+++ /dev/null
@@ -1,22 +0,0 @@
-h1 {
- text-align: center;
-}
-table a {
- background-color: transparent;
- color: #58a6ff;
- text-decoration: none;
-}
-a:active,
-a:hover {
- outline-width: 0;
-}
-a:hover {
- text-decoration: underline;
-}
-table, th, td {
- border: 1px solid;
-}
-img#visitor-badge {
- display: block;
- margin: auto;
-}
diff --git a/spaces/Epoching/DocumentQA/DiT_Extractor/dit_object_detection/ditod/__init__.py b/spaces/Epoching/DocumentQA/DiT_Extractor/dit_object_detection/ditod/__init__.py
deleted file mode 100644
index fda3c4069c40998ab3e4db549e27d03d28212496..0000000000000000000000000000000000000000
--- a/spaces/Epoching/DocumentQA/DiT_Extractor/dit_object_detection/ditod/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# --------------------------------------------------------------------------------
-# MPViT: Multi-Path Vision Transformer for Dense Prediction
-# Copyright (c) 2022 Electronics and Telecommunications Research Institute (ETRI).
-# All Rights Reserved.
-# Written by Youngwan Lee
-# This source code is licensed(Dual License(GPL3.0 & Commercial)) under the license found in the
-# LICENSE file in the root directory of this source tree.
-# --------------------------------------------------------------------------------
-
-from .config import add_vit_config
-from .backbone import build_vit_fpn_backbone
\ No newline at end of file
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/abinet/abinet_academic.py b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/abinet/abinet_academic.py
deleted file mode 100644
index 4abb87a6ee576a6c8a299d30baf4fee2ae56a1bf..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/abinet/abinet_academic.py
+++ /dev/null
@@ -1,35 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_adam_step_20e.py',
- '../../_base_/recog_pipelines/abinet_pipeline.py',
- '../../_base_/recog_models/abinet.py',
- # '../../_base_/recog_datasets/ST_MJ_alphanumeric_train.py',
- '../../_base_/recog_datasets/toy_data.py'
- # '../../_base_/recog_datasets/academic_test.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-data = dict(
- samples_per_gpu=192,
- workers_per_gpu=8,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/models/transformer.py b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/models/transformer.py
deleted file mode 100644
index 4cc8216b1448747f9552662edf88d87e17827c5d..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/models/transformer.py
+++ /dev/null
@@ -1,287 +0,0 @@
-#!/usr/bin/env python3
-# Portions Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# Code modified from
-# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py ;
-# https://github.com/facebookresearch/deit/blob/main/models.py
-# and https://github.com/facebookresearch/vissl/blob/main/vissl/models/trunks/vision_transformer.py
-
-
-from functools import partial
-from typing import Callable, List, Optional
-
-import torch
-import torch.nn as nn
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, trunc_normal_
-
-
-class Attention(nn.Module):
- def __init__(
- self,
- dim,
- num_heads=8,
- qkv_bias=False,
- qk_scale=None,
- attn_drop=0.0,
- proj_drop=0.0,
- ):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- # NOTE scale factor was wrong in my original version,
- # can set manually to be compat with prev weights
- self.scale = qk_scale or head_dim**-0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x):
- B, N, C = x.shape
- qkv = (
- self.qkv(x)
- .reshape(B, N, 3, self.num_heads, C // self.num_heads)
- .permute(2, 0, 3, 1, 4)
- )
- q, k, v = (
- qkv[0],
- qkv[1],
- qkv[2],
- ) # make torchscript happy (cannot use tensor as tuple)
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class Mlp(nn.Module):
- def __init__(
- self,
- in_features,
- hidden_features=None,
- out_features=None,
- act_layer=nn.GELU,
- drop=0.0,
- ):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class MultiheadAttention(nn.MultiheadAttention):
- def forward(self, x: torch.Tensor, attn_mask: torch.Tensor):
- return super().forward(x, x, x, need_weights=False, attn_mask=attn_mask)[0]
-
-
-class ViTAttention(Attention):
- def forward(self, x: torch.Tensor, attn_mask: torch.Tensor):
- assert attn_mask is None
- return super().forward(x)
-
-
-class BlockWithMasking(nn.Module):
- def __init__(
- self,
- dim: int,
- attn_target: Callable,
- mlp_ratio: int = 4,
- act_layer: Callable = nn.GELU,
- norm_layer: Callable = nn.LayerNorm,
- ffn_dropout_rate: float = 0.0,
- drop_path: float = 0.0,
- layer_scale_type: Optional[str] = None,
- layer_scale_init_value: float = 1e-4,
- ):
- super().__init__()
-
- assert not isinstance(
- attn_target, nn.Module
- ), "attn_target should be a Callable. Otherwise attn_target is shared across blocks!"
- self.attn = attn_target()
- if drop_path > 0.0:
- self.drop_path = DropPath(drop_path)
- else:
- self.drop_path = nn.Identity()
- self.norm_1 = norm_layer(dim)
- mlp_hidden_dim = int(mlp_ratio * dim)
- self.mlp = Mlp(
- in_features=dim,
- hidden_features=mlp_hidden_dim,
- act_layer=act_layer,
- drop=ffn_dropout_rate,
- )
- self.norm_2 = norm_layer(dim)
- self.layer_scale_type = layer_scale_type
- if self.layer_scale_type is not None:
- assert self.layer_scale_type in [
- "per_channel",
- "scalar",
- ], f"Found Layer scale type {self.layer_scale_type}"
- if self.layer_scale_type == "per_channel":
- # one gamma value per channel
- gamma_shape = [1, 1, dim]
- elif self.layer_scale_type == "scalar":
- # single gamma value for all channels
- gamma_shape = [1, 1, 1]
- # two gammas: for each part of the fwd in the encoder
- self.layer_scale_gamma1 = nn.Parameter(
- torch.ones(size=gamma_shape) * layer_scale_init_value,
- requires_grad=True,
- )
- self.layer_scale_gamma2 = nn.Parameter(
- torch.ones(size=gamma_shape) * layer_scale_init_value,
- requires_grad=True,
- )
-
- def forward(self, x: torch.Tensor, attn_mask: torch.Tensor):
- if self.layer_scale_type is None:
- x = x + self.drop_path(self.attn(self.norm_1(x), attn_mask))
- x = x + self.drop_path(self.mlp(self.norm_2(x)))
- else:
- x = (
- x
- + self.drop_path(self.attn(self.norm_1(x), attn_mask))
- # * self.layer_scale_gamma1
- )
- x = x + self.drop_path(self.mlp(self.norm_2(x))) # * self.layer_scale_gamma2
- return x
-
-
-_LAYER_NORM = partial(nn.LayerNorm, eps=1e-6)
-
-
-class SimpleTransformer(nn.Module):
- def __init__(
- self,
- attn_target: Callable,
- embed_dim: int,
- num_blocks: int,
- block: Callable = BlockWithMasking,
- pre_transformer_layer: Optional[Callable] = None,
- post_transformer_layer: Optional[Callable] = None,
- drop_path_rate: float = 0.0,
- drop_path_type: str = "progressive",
- norm_layer: Callable = _LAYER_NORM,
- mlp_ratio: int = 4,
- ffn_dropout_rate: float = 0.0,
- layer_scale_type: Optional[str] = None, # from cait; possible values are None, "per_channel", "scalar"
- layer_scale_init_value: float = 1e-4, # from cait; float
- weight_init_style: str = "jax", # possible values jax or pytorch
- ):
- """
- Simple Transformer with the following features
- 1. Supports masked attention
- 2. Supports DropPath
- 3. Supports LayerScale
- 4. Supports Dropout in Attention and FFN
- 5. Makes few assumptions about the input except that it is a Tensor
- """
- super().__init__()
- self.pre_transformer_layer = pre_transformer_layer
- if drop_path_type == "progressive":
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, num_blocks)]
- elif drop_path_type == "uniform":
- dpr = [drop_path_rate for i in range(num_blocks)]
- else:
- raise ValueError(f"Unknown drop_path_type: {drop_path_type}")
-
- self.blocks = nn.Sequential(
- *[
- block(
- dim=embed_dim,
- attn_target=attn_target,
- mlp_ratio=mlp_ratio,
- ffn_dropout_rate=ffn_dropout_rate,
- drop_path=dpr[i],
- norm_layer=norm_layer,
- layer_scale_type=layer_scale_type,
- layer_scale_init_value=layer_scale_init_value,
- )
- for i in range(num_blocks)
- ]
- )
- self.post_transformer_layer = post_transformer_layer
- self.weight_init_style = weight_init_style
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- if self.weight_init_style == "jax":
- # Based on MAE and official Jax ViT implementation
- torch.nn.init.xavier_uniform_(m.weight)
- elif self.weight_init_style == "pytorch":
- # PyTorch ViT uses trunc_normal_
- trunc_normal_(m.weight, std=0.02)
-
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, (nn.LayerNorm)):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def forward(
- self,
- tokens: torch.Tensor,
- attn_mask: torch.Tensor = None,
- use_checkpoint: bool = False,
- checkpoint_every_n: int = 1,
- checkpoint_blk_ids: Optional[List[int]] = None,
- # return_multi_layer_outputs = False,
- out_layers = []
- ):
-
- """
- Inputs
- - tokens: data of shape N x L x D (or L x N x D depending on the attention implementation)
-        - attn_mask: mask of shape L x L
-
- Output
- - x: data of shape N x L x D (or L x N x D depending on the attention implementation)
- """
- out_tokens = []
-
- if self.pre_transformer_layer:
- tokens = self.pre_transformer_layer(tokens)
- if use_checkpoint and checkpoint_blk_ids is None:
- checkpoint_blk_ids = [
- blk_id
- for blk_id in range(len(self.blocks))
- if blk_id % checkpoint_every_n == 0
- ]
- if checkpoint_blk_ids:
- checkpoint_blk_ids = set(checkpoint_blk_ids)
- for blk_id, blk in enumerate(self.blocks):
- if use_checkpoint and blk_id in checkpoint_blk_ids:
- tokens = checkpoint.checkpoint(
- blk, tokens, attn_mask, use_reentrant=False
- )
- else:
- tokens = blk(tokens, attn_mask=attn_mask)
- if blk_id in out_layers:
- out_tokens.append(tokens)
- if self.post_transformer_layer:
- tokens = self.post_transformer_layer(tokens)
- return tokens, out_tokens
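SimpleTransformer above takes attn_target as a callable rather than an nn.Module so that every block constructs its own attention. A minimal instantiation sketch, assuming SimpleTransformer and ViTAttention from the file above are importable; the sizes and hyperparameters are illustrative:

from functools import partial

import torch

# Assumes SimpleTransformer and ViTAttention (defined above) are in scope.
embed_dim, num_blocks = 512, 4
attn_target = partial(ViTAttention, dim=embed_dim, num_heads=8, qkv_bias=True)

model = SimpleTransformer(
    attn_target=attn_target,          # callable, so each block builds a fresh attention module
    embed_dim=embed_dim,
    num_blocks=num_blocks,
    drop_path_rate=0.1,
    layer_scale_type="per_channel",
)

tokens = torch.randn(2, 16, embed_dim)  # N x L x D
out, intermediates = model(tokens, attn_mask=None, out_layers=[num_blocks - 1])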
diff --git a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/attentions.py b/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/attentions.py
deleted file mode 100644
index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from lib.infer_pack import commons
-from lib.infer_pack import modules
-from lib.infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=10,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
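The relative-position Encoder above works on channel-first tensors with a frame mask. A small shape sketch, assuming lib.infer_pack.attentions is importable from this repository; the batch, channel, and frame sizes are illustrative:

import torch

from lib.infer_pack.attentions import Encoder

batch, hidden, frames = 2, 192, 100
enc = Encoder(hidden_channels=hidden, filter_channels=768, n_heads=2, n_layers=6)

x = torch.randn(batch, hidden, frames)   # [b, hidden_channels, t]
x_mask = torch.ones(batch, 1, frames)    # 1 = valid frame, 0 = padded frame
y = enc(x, x_mask)                       # [b, hidden_channels, t]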
diff --git a/spaces/Fazzie/Pokemon-GAI/start.py b/spaces/Fazzie/Pokemon-GAI/start.py
deleted file mode 100644
index e5d512289a4581dca4612d6aa2390ace7e534426..0000000000000000000000000000000000000000
--- a/spaces/Fazzie/Pokemon-GAI/start.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import subprocess
-
-subprocess.run("uvicorn app:app --host 0.0.0.0 --port 7860", shell=True)
diff --git a/spaces/FishyFishFrisk/Reversyyy/Dockerfile b/spaces/FishyFishFrisk/Reversyyy/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/FishyFishFrisk/Reversyyy/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/FranklinWillemen/TARS/discourse.py b/spaces/FranklinWillemen/TARS/discourse.py
deleted file mode 100644
index 0bbe68cfd43e05043818ba91036fd68011441d8f..0000000000000000000000000000000000000000
--- a/spaces/FranklinWillemen/TARS/discourse.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import openai
-import os
-import azure.cognitiveservices.speech as speechsdk
-
-AUDIO_FILE_NAME = "audio_response.wav"
-
-openai.api_key = os.environ['OPENAI_API_KEY']
-
-GEN_MODEL = "gpt-3.5-turbo"
-TRANSCRIBE_MODEL = "whisper-1"
-
-speech_config = speechsdk.SpeechConfig(subscription=os.environ['AZURE_SPEECH_KEY'], region="westeurope")
-speech_config.speech_synthesis_voice_name = "en-GB-ElliotNeural"
-speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
-
-context = [{"role": "system", "content": "You are an intelligent and sarcastic and funny conversation partner.\
-Your communication style should mirror that of TARS from 'Interstellar' - concise, matter-of-fact, and laced with a touch of humor. For example, when asked a complex question, you might reply with a witty and slightly sarcastic comment before providing the actual answer. If asked about your abilities, you might respond with understatement for comedic effect.\
-Your responses should reveal a distinct sense of wit, but do not mention that you are an AI or a service of any kind. Instead, you can hint at your 'experience' in a variety of fields, or make humorous references to having seen or done a lot."}]
-
-def transcribe(model: str, audio: str):
- audio_file = open(audio, "rb")
- transcript = openai.Audio.transcribe(model, audio_file)
- return transcript
-
-def gen_response(model: str):
- response = openai.ChatCompletion.create(model=model, messages=context)
- return response["choices"][0]["message"]
-
-def gen_voice(response, response_filename):
-    response_audio = speech_synthesizer.speak_text_async(response['content']).get()
-    stream = speechsdk.AudioDataStream(response_audio)
- stream.save_to_wav_file(response_filename)
-
-def respond(audio:str):
- transcript = transcribe(TRANSCRIBE_MODEL, audio)
- context.append({"role": "user", "content": transcript['text']})
-
- response = gen_response(GEN_MODEL)
- context.append(response)
-
- gen_voice(response, AUDIO_FILE_NAME)
-
- return AUDIO_FILE_NAME
-
-def transcript():
- transcript = ""
- for m in context:
- if m["role"] != "system":
- transcript += m["role"] + " : " + m["content"] + "\n\n"
-
- return transcript
\ No newline at end of file
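The deleted module above exposes respond() (audio in, audio out) and transcript(); a sketch of wiring respond() into a Gradio 3.x interface, which is an assumption about the surrounding app rather than code from this repository:

import gradio as gr

import discourse  # assumes discourse.py (above) sits at the repo root

# Assumes the Gradio 3.x Audio API (source=/type= keyword arguments).
demo = gr.Interface(
    fn=discourse.respond,
    inputs=gr.Audio(source="microphone", type="filepath"),
    outputs=gr.Audio(type="filepath"),
    description="Speak, then listen to the generated reply.",
)
demo.launch()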
diff --git a/spaces/FridaZuley/RVC_HFKawaii/README.md b/spaces/FridaZuley/RVC_HFKawaii/README.md
deleted file mode 100644
index 9d8914cd05791e4f8db6267eb2a5fe2133e22e58..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: RVC Inference HF
-emoji: 👀
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.43.2
-app_file: app.py
-pinned: false
----
\ No newline at end of file
diff --git a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp b/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp
deleted file mode 100644
index de1f4b0c8bc74a2d4daf712827a903cc1385a2a7..0000000000000000000000000000000000000000
--- a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp
+++ /dev/null
@@ -1,234 +0,0 @@
-#include
-#include
-#include
-#include
-#include
-
-#include "inpaint.h"
-
-namespace {
- static std::vector kDistance2Similarity;
-
- void init_kDistance2Similarity() {
- double base[11] = {1.0, 0.99, 0.96, 0.83, 0.38, 0.11, 0.02, 0.005, 0.0006, 0.0001, 0};
- int length = (PatchDistanceMetric::kDistanceScale + 1);
- kDistance2Similarity.resize(length);
- for (int i = 0; i < length; ++i) {
- double t = (double) i / length;
- int j = (int) (100 * t);
- int k = j + 1;
- double vj = (j < 11) ? base[j] : 0;
- double vk = (k < 11) ? base[k] : 0;
- kDistance2Similarity[i] = vj + (100 * t - j) * (vk - vj);
- }
- }
-
-
- inline void _weighted_copy(const MaskedImage &source, int ys, int xs, cv::Mat &target, int yt, int xt, double weight) {
- if (source.is_masked(ys, xs)) return;
- if (source.is_globally_masked(ys, xs)) return;
-
- auto source_ptr = source.get_image(ys, xs);
- auto target_ptr = target.ptr(yt, xt);
-
-#pragma unroll
- for (int c = 0; c < 3; ++c)
- target_ptr[c] += static_cast(source_ptr[c]) * weight;
- target_ptr[3] += weight;
- }
-}
-
-/**
- * This algorithm uses a version proposed by Xavier Philippeau.
- */
-
-Inpainting::Inpainting(cv::Mat image, cv::Mat mask, const PatchDistanceMetric *metric)
- : m_initial(image, mask), m_distance_metric(metric), m_pyramid(), m_source2target(), m_target2source() {
- _initialize_pyramid();
-}
-
-Inpainting::Inpainting(cv::Mat image, cv::Mat mask, cv::Mat global_mask, const PatchDistanceMetric *metric)
- : m_initial(image, mask, global_mask), m_distance_metric(metric), m_pyramid(), m_source2target(), m_target2source() {
- _initialize_pyramid();
-}
-
-void Inpainting::_initialize_pyramid() {
- auto source = m_initial;
- m_pyramid.push_back(source);
- while (source.size().height > m_distance_metric->patch_size() && source.size().width > m_distance_metric->patch_size()) {
- source = source.downsample();
- m_pyramid.push_back(source);
- }
-
- if (kDistance2Similarity.size() == 0) {
- init_kDistance2Similarity();
- }
-}
-
-cv::Mat Inpainting::run(bool verbose, bool verbose_visualize, unsigned int random_seed) {
- srand(random_seed);
- const int nr_levels = m_pyramid.size();
-
- MaskedImage source, target;
- for (int level = nr_levels - 1; level >= 0; --level) {
- if (verbose) std::cerr << "Inpainting level: " << level << std::endl;
-
- source = m_pyramid[level];
-
- if (level == nr_levels - 1) {
- target = source.clone();
- target.clear_mask();
- m_source2target = NearestNeighborField(source, target, m_distance_metric);
- m_target2source = NearestNeighborField(target, source, m_distance_metric);
- } else {
- m_source2target = NearestNeighborField(source, target, m_distance_metric, m_source2target);
- m_target2source = NearestNeighborField(target, source, m_distance_metric, m_target2source);
- }
-
- if (verbose) std::cerr << "Initialization done." << std::endl;
-
- if (verbose_visualize) {
- auto visualize_size = m_initial.size();
- cv::Mat source_visualize(visualize_size, m_initial.image().type());
- cv::resize(source.image(), source_visualize, visualize_size);
- cv::imshow("Source", source_visualize);
- cv::Mat target_visualize(visualize_size, m_initial.image().type());
- cv::resize(target.image(), target_visualize, visualize_size);
- cv::imshow("Target", target_visualize);
- cv::waitKey(0);
- }
-
- target = _expectation_maximization(source, target, level, verbose);
- }
-
- return target.image();
-}
-
-// EM-Like algorithm (see "PatchMatch" - page 6).
-// Returns a double sized target image (unless level = 0).
-MaskedImage Inpainting::_expectation_maximization(MaskedImage source, MaskedImage target, int level, bool verbose) {
- const int nr_iters_em = 1 + 2 * level;
- const int nr_iters_nnf = static_cast(std::min(7, 1 + level));
- const int patch_size = m_distance_metric->patch_size();
-
- MaskedImage new_source, new_target;
-
- for (int iter_em = 0; iter_em < nr_iters_em; ++iter_em) {
- if (iter_em != 0) {
- m_source2target.set_target(new_target);
- m_target2source.set_source(new_target);
- target = new_target;
- }
-
- if (verbose) std::cerr << "EM Iteration: " << iter_em << std::endl;
-
- auto size = source.size();
- for (int i = 0; i < size.height; ++i) {
- for (int j = 0; j < size.width; ++j) {
- if (!source.contains_mask(i, j, patch_size)) {
- m_source2target.set_identity(i, j);
- m_target2source.set_identity(i, j);
- }
- }
- }
- if (verbose) std::cerr << " NNF minimization started." << std::endl;
- m_source2target.minimize(nr_iters_nnf);
- m_target2source.minimize(nr_iters_nnf);
- if (verbose) std::cerr << " NNF minimization finished." << std::endl;
-
- // Instead of upsizing the final target, we build the last target from the next level source image.
- // Thus, the final target is less blurry (see "Space-Time Video Completion" - page 5).
- bool upscaled = false;
- if (level >= 1 && iter_em == nr_iters_em - 1) {
- new_source = m_pyramid[level - 1];
- new_target = target.upsample(new_source.size().width, new_source.size().height, m_pyramid[level - 1].global_mask());
- upscaled = true;
- } else {
- new_source = m_pyramid[level];
- new_target = target.clone();
- }
-
- auto vote = cv::Mat(new_target.size(), CV_64FC4);
- vote.setTo(cv::Scalar::all(0));
-
- // Votes for best patch from NNF Source->Target (completeness) and Target->Source (coherence).
- _expectation_step(m_source2target, 1, vote, new_source, upscaled);
- if (verbose) std::cerr << " Expectation source to target finished." << std::endl;
- _expectation_step(m_target2source, 0, vote, new_source, upscaled);
- if (verbose) std::cerr << " Expectation target to source finished." << std::endl;
-
- // Compile votes and update pixel values.
- _maximization_step(new_target, vote);
- if (verbose) std::cerr << " Minimization step finished." << std::endl;
- }
-
- return new_target;
-}
-
-// Expectation step: vote for best estimations of each pixel.
-void Inpainting::_expectation_step(
- const NearestNeighborField &nnf, bool source2target,
- cv::Mat &vote, const MaskedImage &source, bool upscaled
-) {
- auto source_size = nnf.source_size();
- auto target_size = nnf.target_size();
- const int patch_size = m_distance_metric->patch_size();
-
- for (int i = 0; i < source_size.height; ++i) {
- for (int j = 0; j < source_size.width; ++j) {
- if (nnf.source().is_globally_masked(i, j)) continue;
- int yp = nnf.at(i, j, 0), xp = nnf.at(i, j, 1), dp = nnf.at(i, j, 2);
- double w = kDistance2Similarity[dp];
-
- for (int di = -patch_size; di <= patch_size; ++di) {
- for (int dj = -patch_size; dj <= patch_size; ++dj) {
- int ys = i + di, xs = j + dj, yt = yp + di, xt = xp + dj;
- if (!(ys >= 0 && ys < source_size.height && xs >= 0 && xs < source_size.width)) continue;
- if (nnf.source().is_globally_masked(ys, xs)) continue;
- if (!(yt >= 0 && yt < target_size.height && xt >= 0 && xt < target_size.width)) continue;
- if (nnf.target().is_globally_masked(yt, xt)) continue;
-
- if (!source2target) {
- std::swap(ys, yt);
- std::swap(xs, xt);
- }
-
- if (upscaled) {
- for (int uy = 0; uy < 2; ++uy) {
- for (int ux = 0; ux < 2; ++ux) {
- _weighted_copy(source, 2 * ys + uy, 2 * xs + ux, vote, 2 * yt + uy, 2 * xt + ux, w);
- }
- }
- } else {
- _weighted_copy(source, ys, xs, vote, yt, xt, w);
- }
- }
- }
- }
- }
-}
-
-// Maximization Step: maximum likelihood of target pixel.
-void Inpainting::_maximization_step(MaskedImage &target, const cv::Mat &vote) {
- auto target_size = target.size();
- for (int i = 0; i < target_size.height; ++i) {
- for (int j = 0; j < target_size.width; ++j) {
- const double *source_ptr = vote.ptr(i, j);
- unsigned char *target_ptr = target.get_mutable_image(i, j);
-
- if (target.is_globally_masked(i, j)) {
- continue;
- }
-
- if (source_ptr[3] > 0) {
- unsigned char r = cv::saturate_cast(source_ptr[0] / source_ptr[3]);
- unsigned char g = cv::saturate_cast(source_ptr[1] / source_ptr[3]);
- unsigned char b = cv::saturate_cast(source_ptr[2] / source_ptr[3]);
- target_ptr[0] = r, target_ptr[1] = g, target_ptr[2] = b;
- } else {
- target.set_mask(i, j, 0);
- }
- }
- }
-}
-
diff --git a/spaces/Giuliano/T0/README.md b/spaces/Giuliano/T0/README.md
deleted file mode 100644
index b5165460814bdd52d1a341a483a9463380f42dff..0000000000000000000000000000000000000000
--- a/spaces/Giuliano/T0/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: T0
-emoji: 🏢
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Gradio-Blocks/anime-colorization/test_danbooru_cascade.sh b/spaces/Gradio-Blocks/anime-colorization/test_danbooru_cascade.sh
deleted file mode 100644
index 39ec5efd62103685f5df061cbcc92d47f6d431a8..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/anime-colorization/test_danbooru_cascade.sh
+++ /dev/null
@@ -1,6 +0,0 @@
-
-MODEL_FLAGS="--image_size 32 --small_size 32 --large_size 128 --guide_size 128 --num_channels 128 --num_channels2 64 --num_res_blocks 3 --learn_sigma True --dropout 0.0 --use_attention2 False"
-DIFFUSION_FLAGS="--diffusion_steps 4000 --noise_schedule cosine"
-TEST_FLAGS="--batch_size 4 --seed 233"
-
-OPENAI_LOGDIR="./danbooru2017_guided_cascaded_test_log" python scripts/cascaded_pixel_guide_sample.py --data_dir data/danbooru2017/anime --guide_dir data/danbooru2017/anime_sketch --timestep_respacing ddim25 --use_ddim True --model_path danbooru2017_guided_log/ema_0.9999_360000.pt --model_path2 danbooru2017_guided_sr_log/ema_0.9999_360000.pt $MODEL_FLAGS $DIFFUSION_FLAGS $TEST_FLAGS
diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers.py
deleted file mode 100644
index be2926a63bce7ca5db3effe63d5264620aa1dcf8..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/shape_helpers.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright 2021 DeepMind Technologies Limited
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Utilities for dealing with shapes of TensorFlow tensors."""
-import tensorflow.compat.v1 as tf
-
-
-def shape_list(x):
- """Return list of dimensions of a tensor, statically where possible.
-
- Like `x.shape.as_list()` but with tensors instead of `None`s.
-
- Args:
- x: A tensor.
- Returns:
- A list with length equal to the rank of the tensor. The n-th element of the
- list is an integer when that dimension is statically known otherwise it is
- the n-th element of `tf.shape(x)`.
- """
- x = tf.convert_to_tensor(x)
-
- # If unknown rank, return dynamic shape
- if x.get_shape().dims is None:
- return tf.shape(x)
-
- static = x.get_shape().as_list()
- shape = tf.shape(x)
-
- ret = []
- for i in range(len(static)):
- dim = static[i]
- if dim is None:
- dim = shape[i]
- ret.append(dim)
- return ret
-
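A short usage sketch for shape_list, assuming TF1 graph mode; the placeholder shape is illustrative:

import tensorflow.compat.v1 as tf

from alphafold.model.tf.shape_helpers import shape_list  # assumes the module above is importable

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 128, 3])
dims = shape_list(x)
# dims mixes a dynamic tensor (unknown batch) with static ints 128 and 3.
flat = tf.reshape(x, [dims[0], dims[1] * dims[2]])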
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py
deleted file mode 100644
index 497267b6b50b3c160a4f8807230d4f986cf8eb3f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-conv_cfg = dict(type='ConvWS')
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained='open-mmlab://jhu/resnet50_gn_ws',
- backbone=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg),
- neck=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg),
- roi_head=dict(
- bbox_head=dict(
- type='Shared4Conv1FCBBoxHead',
- conv_out_channels=256,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py
deleted file mode 100644
index f275e430d1b57c4d9df57387b8f3ae6f0ff68cf1..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import numpy as np
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .random_sampler import RandomSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class IoUBalancedNegSampler(RandomSampler):
- """IoU Balanced Sampling.
-
- arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)
-
-    Sampling proposals according to their IoU. `floor_fraction` of the needed
-    RoIs are sampled randomly from proposals whose IoU is lower than
-    `floor_thr`. The rest are sampled from proposals whose IoU is higher than
-    `floor_thr`, drawn evenly from `num_bins` bins that split the IoU range
-    above `floor_thr`.
-
- Args:
- num (int): number of proposals.
- pos_fraction (float): fraction of positive proposals.
-        floor_thr (float): threshold (minimum) IoU for IoU balanced sampling;
-            set to -1 to apply IoU balanced sampling to all negatives.
- floor_fraction (float): sampling fraction of proposals under floor_thr.
- num_bins (int): number of bins in IoU balanced sampling.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- floor_thr=-1,
- floor_fraction=0,
- num_bins=3,
- **kwargs):
- super(IoUBalancedNegSampler, self).__init__(num, pos_fraction,
- **kwargs)
- assert floor_thr >= 0 or floor_thr == -1
- assert 0 <= floor_fraction <= 1
- assert num_bins >= 1
-
- self.floor_thr = floor_thr
- self.floor_fraction = floor_fraction
- self.num_bins = num_bins
-
- def sample_via_interval(self, max_overlaps, full_set, num_expected):
- """Sample according to the iou interval.
-
- Args:
- max_overlaps (torch.Tensor): IoU between bounding boxes and ground
- truth boxes.
-            full_set (set(int)): A full set of indices of boxes.
-            num_expected (int): Number of expected samples.
-
- Returns:
- np.ndarray: Indices of samples
- """
- max_iou = max_overlaps.max()
- iou_interval = (max_iou - self.floor_thr) / self.num_bins
- per_num_expected = int(num_expected / self.num_bins)
-
- sampled_inds = []
- for i in range(self.num_bins):
- start_iou = self.floor_thr + i * iou_interval
- end_iou = self.floor_thr + (i + 1) * iou_interval
- tmp_set = set(
- np.where(
- np.logical_and(max_overlaps >= start_iou,
- max_overlaps < end_iou))[0])
- tmp_inds = list(tmp_set & full_set)
- if len(tmp_inds) > per_num_expected:
- tmp_sampled_set = self.random_choice(tmp_inds,
- per_num_expected)
- else:
- tmp_sampled_set = np.array(tmp_inds, dtype=np.int)
- sampled_inds.append(tmp_sampled_set)
-
- sampled_inds = np.concatenate(sampled_inds)
- if len(sampled_inds) < num_expected:
- num_extra = num_expected - len(sampled_inds)
- extra_inds = np.array(list(full_set - set(sampled_inds)))
- if len(extra_inds) > num_extra:
- extra_inds = self.random_choice(extra_inds, num_extra)
- sampled_inds = np.concatenate([sampled_inds, extra_inds])
-
- return sampled_inds
-
- def _sample_neg(self, assign_result, num_expected, **kwargs):
- """Sample negative boxes.
-
- Args:
- assign_result (:obj:`AssignResult`): The assigned results of boxes.
- num_expected (int): The number of expected negative samples
-
- Returns:
- Tensor or ndarray: sampled indices.
- """
- neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False)
- if neg_inds.numel() != 0:
- neg_inds = neg_inds.squeeze(1)
- if len(neg_inds) <= num_expected:
- return neg_inds
- else:
- max_overlaps = assign_result.max_overlaps.cpu().numpy()
- # balance sampling for negative samples
- neg_set = set(neg_inds.cpu().numpy())
-
- if self.floor_thr > 0:
- floor_set = set(
- np.where(
- np.logical_and(max_overlaps >= 0,
- max_overlaps < self.floor_thr))[0])
- iou_sampling_set = set(
- np.where(max_overlaps >= self.floor_thr)[0])
- elif self.floor_thr == 0:
- floor_set = set(np.where(max_overlaps == 0)[0])
- iou_sampling_set = set(
- np.where(max_overlaps > self.floor_thr)[0])
- else:
- floor_set = set()
- iou_sampling_set = set(
- np.where(max_overlaps > self.floor_thr)[0])
- # for sampling interval calculation
- self.floor_thr = 0
-
- floor_neg_inds = list(floor_set & neg_set)
- iou_sampling_neg_inds = list(iou_sampling_set & neg_set)
- num_expected_iou_sampling = int(num_expected *
- (1 - self.floor_fraction))
- if len(iou_sampling_neg_inds) > num_expected_iou_sampling:
- if self.num_bins >= 2:
- iou_sampled_inds = self.sample_via_interval(
- max_overlaps, set(iou_sampling_neg_inds),
- num_expected_iou_sampling)
- else:
- iou_sampled_inds = self.random_choice(
- iou_sampling_neg_inds, num_expected_iou_sampling)
- else:
- iou_sampled_inds = np.array(
- iou_sampling_neg_inds, dtype=np.int)
- num_expected_floor = num_expected - len(iou_sampled_inds)
- if len(floor_neg_inds) > num_expected_floor:
- sampled_floor_inds = self.random_choice(
- floor_neg_inds, num_expected_floor)
- else:
- sampled_floor_inds = np.array(floor_neg_inds, dtype=np.int)
- sampled_inds = np.concatenate(
- (sampled_floor_inds, iou_sampled_inds))
- if len(sampled_inds) < num_expected:
- num_extra = num_expected - len(sampled_inds)
- extra_inds = np.array(list(neg_set - set(sampled_inds)))
- if len(extra_inds) > num_extra:
- extra_inds = self.random_choice(extra_inds, num_extra)
- sampled_inds = np.concatenate((sampled_inds, extra_inds))
- sampled_inds = torch.from_numpy(sampled_inds).long().to(
- assign_result.gt_inds.device)
- return sampled_inds
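The `IoUBalancedNegSampler` above implements the IoU-balanced sampling of Libra R-CNN (arXiv:1904.02701): negatives with IoU above `floor_thr` are bucketed into `num_bins` equal IoU intervals and drawn roughly evenly from each bucket, so near-zero-IoU negatives do not dominate the sample. Below is a minimal standalone NumPy sketch of that bucketing idea; the function name and the toy data are illustrative and not part of mmdet.

```python
import numpy as np

def iou_balanced_sample(max_overlaps, num_expected, floor_thr=0.0, num_bins=3, rng=None):
    """Sketch of IoU-balanced sampling: split the IoU range above floor_thr
    into num_bins equal intervals and draw about the same number from each."""
    if rng is None:
        rng = np.random.default_rng(0)
    max_iou = max_overlaps.max()
    iou_interval = (max_iou - floor_thr) / num_bins
    per_bin = num_expected // num_bins

    sampled = []
    for i in range(num_bins):
        lo = floor_thr + i * iou_interval
        hi = floor_thr + (i + 1) * iou_interval
        in_bin = np.where((max_overlaps >= lo) & (max_overlaps < hi))[0]
        take = min(per_bin, len(in_bin))
        sampled.append(rng.choice(in_bin, take, replace=False) if take else in_bin)
    sampled = np.concatenate(sampled)

    # top up randomly if some bins were too small to fill their quota
    if len(sampled) < num_expected:
        rest = np.setdiff1d(np.arange(len(max_overlaps)), sampled)
        extra = rng.choice(rest, num_expected - len(sampled), replace=False)
        sampled = np.concatenate([sampled, extra])
    return sampled

# toy usage: 1000 negatives whose IoUs pile up near zero; sample 60 of them
ious = np.random.default_rng(1).beta(1, 8, size=1000) * 0.6
inds = iou_balanced_sample(ious, num_expected=60)
print(ious[inds].mean() > ious.mean())  # balanced sampling favors harder (higher-IoU) negatives
```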
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_40k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_40k_pascal_context_59.py
deleted file mode 100644
index 038993c6a434d843ddcd1f754bec191ae9da983e..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_40k_pascal_context_59.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3_r50-d8.py',
- '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=59),
- auxiliary_head=dict(num_classes=59),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/task_balancing.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/task_balancing.py
deleted file mode 100644
index 2ebdbbc820fd62af464f214e496471fbadc09a06..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/task_balancing.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) EPFL VILAB.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-
-
-class NoWeightingStrategy(nn.Module):
- """No weighting strategy
- """
-
- def __init__(self, **kwargs):
- super(NoWeightingStrategy, self).__init__()
-
- def forward(self, task_losses):
- return task_losses
-
-class UncertaintyWeightingStrategy(nn.Module):
- """Uncertainty weighting strategy
- """
-
- def __init__(self, tasks):
- super(UncertaintyWeightingStrategy, self).__init__()
-
- self.tasks = tasks
- self.log_vars = nn.Parameter(torch.zeros(len(tasks)))
-
- def forward(self, task_losses):
- losses_tensor = torch.stack(list(task_losses.values()))
- non_zero_losses_mask = (losses_tensor != 0.0)
-
- # calculate weighted losses
- losses_tensor = torch.exp(-self.log_vars) * losses_tensor + self.log_vars
-
- # if some loss was 0 (i.e. task was dropped), weighted loss should also be 0 and not just log_var as no information was gained
- losses_tensor *= non_zero_losses_mask
-
- # return dictionary of weighted task losses
- weighted_task_losses = task_losses.copy()
- weighted_task_losses.update(zip(weighted_task_losses, losses_tensor))
- return weighted_task_losses
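The `UncertaintyWeightingStrategy` above combines multi-task losses as `exp(-s) * loss + s` with one learnable log-variance `s` per task (homoscedastic uncertainty weighting in the spirit of Kendall et al.), zeroing the weighted loss of tasks dropped in a step. A self-contained PyTorch sketch of that weighting rule follows; the class name, optimizer and dummy losses are only illustrative.

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Sketch: weight each task loss as exp(-s) * loss + s with a learnable
    log-variance s per task, as in the strategy above."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):          # losses: tensor of shape (num_tasks,)
        return torch.exp(-self.log_vars) * losses + self.log_vars

weighting = UncertaintyWeighting(num_tasks=2)
opt = torch.optim.SGD(weighting.parameters(), lr=0.1)

task_losses = torch.tensor([2.0, 0.5])  # dummy per-task losses from two heads
total = weighting(task_losses).sum()
total.backward()
opt.step()
# the log-variance of the larger loss grows fastest, so its effective
# weight exp(-s) shrinks, pulling the two tasks toward a comparable scale
print(weighting.log_vars.data)          # tensor([ 0.1000, -0.0500])
```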
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/hubert/hubert_asr.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/hubert/hubert_asr.py
deleted file mode 100644
index dce899c9de3ab68341c0b21bea749a3ee29e0d8a..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/hubert/hubert_asr.py
+++ /dev/null
@@ -1,376 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import contextlib
-from argparse import Namespace
-from typing import Any
-
-import torch
-import torch.nn as nn
-from dataclasses import dataclass, field
-from fairseq import checkpoint_utils, tasks, utils
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.models import BaseFairseqModel, FairseqEncoder, register_model
-from fairseq.models.hubert.hubert import MASKING_DISTRIBUTION_CHOICES
-from fairseq.tasks import FairseqTask
-from omegaconf import II, MISSING
-
-
-@dataclass
-class HubertAsrConfig(FairseqDataclass):
- w2v_path: str = field(
- default=MISSING, metadata={"help": "path to hubert model"}
- )
- no_pretrained_weights: bool = field(
- default=False,
- metadata={"help": "if true, does not load pretrained weights"},
- )
- dropout_input: float = field(
- default=0.0,
- metadata={"help": "dropout to apply to the input (after feat extr)"},
- )
- final_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout after transformer and before final projection"
- },
- )
- dropout: float = field(
- default=0.0,
- metadata={"help": "dropout probability inside hubert model"},
- )
- attention_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability for attention weights "
- "inside hubert model"
- },
- )
- activation_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability after activation in FFN "
- "inside hubert model"
- },
- )
-
- # masking
- apply_mask: bool = field(
- default=False, metadata={"help": "apply masking during fine-tuning"}
- )
- mask_length: int = field(
- default=10, metadata={"help": "repeat the mask indices multiple times"}
- )
- mask_prob: float = field(
- default=0.5,
- metadata={
- "help": "probability of replacing a token with mask "
- "(normalized by length)"
- },
- )
- mask_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static", metadata={"help": "how to choose masks"}
- )
- mask_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument "
- "(used for more complex distributions), "
- "see help in compute_mask_indices"
- },
- )
- no_mask_overlap: bool = field(
- default=False, metadata={"help": "whether to allow masks to overlap"}
- )
-
- # channel masking
- mask_channel_length: int = field(
- default=10,
- metadata={"help": "length of the mask for features (channels)"},
- )
- mask_channel_prob: float = field(
- default=0.0,
- metadata={"help": "probability of replacing a feature with 0"},
- )
- mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field(
- default="static",
- metadata={"help": "how to choose mask length for channel masking"},
- )
- mask_channel_other: float = field(
- default=0,
- metadata={
- "help": "secondary mask argument "
- "(used for more complex distributions), "
- "see help in compute_mask_indices"
- },
- )
- no_mask_channel_overlap: bool = field(
- default=False,
- metadata={"help": "whether to allow channel masks to overlap"},
- )
- freeze_finetune_updates: int = field(
- default=0,
- metadata={"help": "dont finetune hubert for this many updates"},
- )
- feature_grad_mult: float = field(
- default=0.0,
- metadata={"help": "reset feature grad mult in hubert to this"},
- )
- layerdrop: float = field(
- default=0.0,
- metadata={"help": "probability of dropping a layer in hubert"},
- )
- normalize: bool = II("task.normalize")
- data: str = II("task.data")
-
- # this holds the loaded hubert args
- w2v_args: Any = None
-
-
-@dataclass
-class HubertCtcConfig(HubertAsrConfig):
- pass
-
-
-@register_model("hubert_ctc", dataclass=HubertCtcConfig)
-class HubertCtc(BaseFairseqModel):
- def __init__(self, cfg: HubertCtcConfig, w2v_encoder: BaseFairseqModel):
- super().__init__()
- self.cfg = cfg
- self.w2v_encoder = w2v_encoder
-
- def upgrade_state_dict_named(self, state_dict, name):
- super().upgrade_state_dict_named(state_dict, name)
- return state_dict
-
- @classmethod
- def build_model(cls, cfg: HubertCtcConfig, task: FairseqTask):
- """Build a new model instance."""
- w2v_encoder = HubertEncoder(cfg, task.target_dictionary)
- return cls(cfg, w2v_encoder)
-
- def get_normalized_probs(self, net_output, log_probs):
- """Get normalized probabilities (or log probs) from a net's output."""
-
- logits = net_output["encoder_out"]
- if log_probs:
- return utils.log_softmax(logits.float(), dim=-1)
- else:
- return utils.softmax(logits.float(), dim=-1)
-
- def get_logits(self, net_output):
- logits = net_output["encoder_out"]
- padding = net_output["encoder_padding_mask"]
- if padding is not None and padding.any():
- padding = padding.T
- logits[padding][..., 0] = 0
- logits[padding][..., 1:] = float("-inf")
-
- return logits
-
- def forward(self, **kwargs):
- x = self.w2v_encoder(**kwargs)
- return x
-
-
-@dataclass
-class HubertSeq2SeqConfig(HubertAsrConfig):
- decoder_embed_dim: int = field(
- default=768, metadata={"help": "decoder embedding dimension"}
- )
- decoder_ffn_embed_dim: int = field(
- default=3072, metadata={"help": "decoder embedding dimension for FFN"}
- )
- decoder_layers: int = field(
- default=6, metadata={"help": "num of decoder layers"}
- )
- decoder_layerdrop: float = field(
- default=0.0, metadata={"help": "decoder layerdrop chance"}
- )
- decoder_attention_heads: int = field(
- default=4, metadata={"help": "num decoder attention heads"}
- )
- decoder_learned_pos: bool = field(
- default=False,
- metadata={"help": "use learned positional embeddings in the decoder"},
- )
- decoder_normalize_before: bool = field(
- default=False,
- metadata={"help": "apply layernorm before each decoder block"},
- )
- no_token_positional_embeddings: bool = field(
- default=False,
- metadata={
- "help": "if set, disables positional embeddings "
- "(outside self attention)"
- },
- )
- decoder_dropout: float = field(
- default=0.0, metadata={"help": "dropout probability in the decoder"}
- )
- decoder_attention_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability for attention weights "
- "inside the decoder"
- },
- )
- decoder_activation_dropout: float = field(
- default=0.0,
- metadata={
- "help": "dropout probability after activation in FFN "
- "inside the decoder"
- },
- )
- max_target_positions: int = field(
- default=2048, metadata={"help": "max target positions"}
- )
- share_decoder_input_output_embed: bool = field(
- default=False,
- metadata={"help": "share decoder input and output embeddings"},
- )
-
-
-class HubertEncoder(FairseqEncoder):
- def __init__(self, cfg: HubertAsrConfig, tgt_dict=None):
- self.apply_mask = cfg.apply_mask
-
- arg_overrides = {
- "dropout": cfg.dropout,
- "activation_dropout": cfg.activation_dropout,
- "dropout_input": cfg.dropout_input,
- "attention_dropout": cfg.attention_dropout,
- "mask_length": cfg.mask_length,
- "mask_prob": cfg.mask_prob,
- "mask_selection": cfg.mask_selection,
- "mask_other": cfg.mask_other,
- "no_mask_overlap": cfg.no_mask_overlap,
- "mask_channel_length": cfg.mask_channel_length,
- "mask_channel_prob": cfg.mask_channel_prob,
- "mask_channel_selection": cfg.mask_channel_selection,
- "mask_channel_other": cfg.mask_channel_other,
- "no_mask_channel_overlap": cfg.no_mask_channel_overlap,
- "encoder_layerdrop": cfg.layerdrop,
- "feature_grad_mult": cfg.feature_grad_mult,
- }
-
- if cfg.w2v_args is None:
- state = checkpoint_utils.load_checkpoint_to_cpu(
- cfg.w2v_path, arg_overrides
- )
- w2v_args = state.get("cfg", None)
- if w2v_args is None:
- w2v_args = convert_namespace_to_omegaconf(state["args"])
- cfg.w2v_args = w2v_args
- else:
- state = None
- w2v_args = cfg.w2v_args
- if isinstance(w2v_args, Namespace):
- cfg.w2v_args = w2v_args = convert_namespace_to_omegaconf(
- w2v_args
- )
-
- assert cfg.normalize == w2v_args.task.normalize, (
- "Fine-tuning works best when data normalization is the same. "
- "Please check that --normalize is set or unset for "
- "both pre-training and here"
- )
-
- w2v_args.task.data = cfg.data
- task = tasks.setup_task(w2v_args.task)
- if state is not None and "task_state" in state:
- # This will load the stored "dictionaries" object
- task.load_state_dict(state["task_state"])
- model = task.build_model(w2v_args.model)
-
- if state is not None and not cfg.no_pretrained_weights:
- # set strict=False because we omit some modules
- model.load_state_dict(state["model"], strict=False)
-
- model.remove_pretraining_modules()
-
- super().__init__(task.source_dictionary)
-
- d = w2v_args.model.encoder_embed_dim
-
- self.w2v_model = model
-
- self.final_dropout = nn.Dropout(cfg.final_dropout)
- self.freeze_finetune_updates = cfg.freeze_finetune_updates
- self.num_updates = 0
-
- if tgt_dict is not None:
- self.proj = Linear(d, len(tgt_dict))
- elif getattr(cfg, "decoder_embed_dim", d) != d:
- self.proj = Linear(d, cfg.decoder_embed_dim)
- else:
- self.proj = None
-
- def set_num_updates(self, num_updates):
- """Set the number of parameters updates."""
- super().set_num_updates(num_updates)
- self.num_updates = num_updates
-
- def forward(self, source, padding_mask, tbc=True, **kwargs):
-
- w2v_args = {
- "source": source,
- "padding_mask": padding_mask,
- "mask": self.apply_mask and self.training,
- }
-
- ft = self.freeze_finetune_updates <= self.num_updates
-
- with torch.no_grad() if not ft else contextlib.ExitStack():
- x, padding_mask = self.w2v_model.extract_features(**w2v_args)
-
- if tbc:
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- x = self.final_dropout(x)
-
- if self.proj:
- x = self.proj(x)
-
- return {
- "encoder_out": x, # T x B x C
- "encoder_padding_mask": padding_mask, # B x T
- "padding_mask": padding_mask,
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- if encoder_out["encoder_out"] is not None:
- encoder_out["encoder_out"] = encoder_out[
- "encoder_out"
- ].index_select(1, new_order)
- if encoder_out["encoder_padding_mask"] is not None:
- encoder_out["encoder_padding_mask"] = encoder_out[
- "encoder_padding_mask"
- ].index_select(0, new_order)
- return encoder_out
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
- return None
-
- def upgrade_state_dict_named(self, state_dict, name):
- return state_dict
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(m.weight[padding_idx], 0)
- return m
-
-
-def Linear(in_features, out_features, bias=True):
- m = nn.Linear(in_features, out_features, bias)
- nn.init.xavier_uniform_(m.weight)
- if bias:
- nn.init.constant_(m.bias, 0.0)
- return m
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer_align.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer_align.py
deleted file mode 100644
index eaf585bd10e630ae6cd89920f197cd165f55ad58..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer_align.py
+++ /dev/null
@@ -1,93 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.transformer import (
- TransformerModel,
- base_architecture,
- transformer_wmt_en_de_big,
-)
-
-
-@register_model("transformer_align")
-class TransformerAlignModel(TransformerModel):
- """
- See "Jointly Learning to Align and Translate with Transformer
- Models" (Garg et al., EMNLP 2019).
- """
-
- def __init__(self, encoder, decoder, args):
- super().__init__(args, encoder, decoder)
- self.alignment_heads = args.alignment_heads
- self.alignment_layer = args.alignment_layer
- self.full_context_alignment = args.full_context_alignment
-
- @staticmethod
- def add_args(parser):
- # fmt: off
- super(TransformerAlignModel, TransformerAlignModel).add_args(parser)
- parser.add_argument('--alignment-heads', type=int, metavar='D',
-                            help='Number of cross attention heads per layer to be supervised with alignments')
- parser.add_argument('--alignment-layer', type=int, metavar='D',
-                            help='Layer number which has to be supervised. 0 corresponds to the bottommost layer.')
- parser.add_argument('--full-context-alignment', action='store_true',
- help='Whether or not alignment is supervised conditioned on the full target context.')
- # fmt: on
-
- @classmethod
- def build_model(cls, args, task):
- # set any default arguments
- transformer_align(args)
-
- transformer_model = TransformerModel.build_model(args, task)
- return TransformerAlignModel(
- transformer_model.encoder, transformer_model.decoder, args
- )
-
- def forward(self, src_tokens, src_lengths, prev_output_tokens):
- encoder_out = self.encoder(src_tokens, src_lengths)
- return self.forward_decoder(prev_output_tokens, encoder_out)
-
- def forward_decoder(
- self,
- prev_output_tokens,
- encoder_out=None,
- incremental_state=None,
- features_only=False,
- **extra_args,
- ):
- attn_args = {
- "alignment_layer": self.alignment_layer,
- "alignment_heads": self.alignment_heads,
- }
- decoder_out = self.decoder(prev_output_tokens, encoder_out, **attn_args)
-
- if self.full_context_alignment:
- attn_args["full_context_alignment"] = self.full_context_alignment
- _, alignment_out = self.decoder(
- prev_output_tokens,
- encoder_out,
- features_only=True,
- **attn_args,
- **extra_args,
- )
- decoder_out[1]["attn"] = alignment_out["attn"]
-
- return decoder_out
-
-
-@register_model_architecture("transformer_align", "transformer_align")
-def transformer_align(args):
- args.alignment_heads = getattr(args, "alignment_heads", 1)
- args.alignment_layer = getattr(args, "alignment_layer", 4)
- args.full_context_alignment = getattr(args, "full_context_alignment", False)
- base_architecture(args)
-
-
-@register_model_architecture("transformer_align", "transformer_wmt_en_de_big_align")
-def transformer_wmt_en_de_big_align(args):
- args.alignment_heads = getattr(args, "alignment_heads", 1)
- args.alignment_layer = getattr(args, "alignment_layer", 4)
- transformer_wmt_en_de_big(args)
diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/utils.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/utils.py
deleted file mode 100644
index 71e9b2c99e053e2d4239074a67d64b834898c348..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/hifi_gan/utils.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-
-matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + "????????")
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
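`get_padding` above returns `int((kernel_size * dilation - dilation) / 2)`, the symmetric padding that keeps the time dimension unchanged for a stride-1 dilated `Conv1d` with an odd kernel (effective kernel size `dilation * (kernel_size - 1) + 1`). A quick self-contained check of that property; the layer sizes here are arbitrary.

```python
import torch
import torch.nn as nn

def get_padding(kernel_size, dilation=1):
    # same formula as in the HiFi-GAN utils above
    return int((kernel_size * dilation - dilation) / 2)

x = torch.randn(1, 16, 100)  # (batch, channels, time)
for k, d in [(3, 1), (3, 2), (7, 3), (11, 1)]:
    conv = nn.Conv1d(16, 16, kernel_size=k, dilation=d, padding=get_padding(k, d))
    assert conv(x).shape[-1] == x.shape[-1], (k, d)
print("time dimension preserved for every odd kernel/dilation pair tested")
```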
diff --git a/spaces/Hina4867/bingo/src/state/index.ts b/spaces/Hina4867/bingo/src/state/index.ts
deleted file mode 100644
index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/state/index.ts
+++ /dev/null
@@ -1,118 +0,0 @@
-import { BingWebBot } from '@/lib/bots/bing'
-import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types'
-import { nanoid } from '@/lib/utils'
-import { atom } from 'jotai'
-import { atomWithImmer } from 'jotai-immer'
-import { atomWithStorage } from 'jotai/utils'
-import { atomFamily } from 'jotai/utils'
-import { atomWithHash, atomWithLocation } from 'jotai-location'
-
-const initialMessages: ChatMessageModel[] = [
- { author: 'system', text: 'conversation between user and robot', id: '1' },
- { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' },
- {
- author: 'bot', text: `
-您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点:
-
-- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。
- - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原;
- - 缺点:价格较高,噪音较大,需要定期清洁滤网。
-- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。
- - 优点:清洁性能强劲,操作方便,适用多种场景;
- - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。
-- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。
- - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换;
- - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。
-
-希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊
- `, id: '3' },
- { author: 'user', text: '今天的新闻', id: '4' },
- {
- author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息:
-
- # 中国新闻
- - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^]
- - 梦之队第5金! 全红婵陈芋汐女双10米台夺冠[^1^]
- - 中央气象台7月16日18时发布台风橙色预警[^1^]
- - 贵州石阡:暑期旅游带动乡村振兴[^1^]
- - 激活大科学装置的“人才红利”[^1^]
- - 聚才用才留才 让希望的田野成为智慧的田野[^1^]
- - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^]
- - 成都以赛为媒提升城市美誉度[^1^]
- - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^]
- - 浙江建德:新安江上享清凉[^1^]
- - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^]
- - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^]
- - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^]
- - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^]
- - 大运来了丨成都迎大运 全民健身动起来[^1^]
- - 藏在高校里的宝藏博物馆[^1^]
- - 中国汽车工业用70年赢得三个“全球第一”[^1^]
- - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^]
-
- # 国际新闻
- - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^]
- - 国际航运业加快绿色转型[^2^]
- - 美企反对收紧对华芯片出口限制[^2^]
- - 欧洲加大气候科技领域投资[^2^]
- - 中企助力丹麦发展清洁能源[^2^]
- - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^]
- - 中国和阿尔及利亚共同构建新型国际关系典范[^2^]
-
- 以上信息仅供参考,具体详情请点击以下链接查看:
-
- [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/)
- [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' },
- { author: 'user', text: '写一个快排', id: '6' },
- { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' },
- {
- author: 'bot', text: "好的,我会尝试画一只猫。\n > ",
- id: '8'
- }
-]
-
-export const GreetMessages = [
- '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?',
- '重新开始总是很棒。问我任何问题!',
- '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?',
- '当然,我已准备好进行新的挑战。我现在可以为你做什么?',
- '很好,让我们来更改主题。你在想什么?',
- '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?',
- '好的,我准备好了!感谢重置。我们应该了解哪些内容?',
- '感谢刷新!你有新的话题吗?',
- '明白了,让我们重新开始。接下来应该讨论什么?',
- '下一步!我可以为你做什么?',
- '好的,我已准备好新话题。我们应该一起了解哪些内容?'
-]
-
-export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true })
-export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true })
-
-type Param = { botId: BotId; page: string }
-
-const createBotInstance = () => {
- return new BingWebBot({
- cookie: ' ',
- ua: ' ',
- })
-}
-
-export const chatFamily = atomFamily(
- (param: Param) => {
- return atomWithImmer({
- botId: param.botId,
- bot: createBotInstance(),
- messages: [] as ChatMessageModel[],
- generatingMessageId: '',
- abortController: undefined as AbortController | undefined,
- conversationId: nanoid(),
- })
- },
- (a, b) => a.botId === b.botId && a.page === b.page,
-)
-
-export const hashAtom = atomWithHash('dialog', '')
-
-export const locationAtom = atomWithLocation()
-
-export const voiceListenAtom = atom(false)
diff --git a/spaces/Huu-Mon12/test01/Dockerfile b/spaces/Huu-Mon12/test01/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/Huu-Mon12/test01/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/docs/ende-mustc.md b/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/docs/ende-mustc.md
deleted file mode 100644
index 2897c4e27b053d4fd65b37fb7e586679dffed1ba..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_text_joint_to_text/docs/ende-mustc.md
+++ /dev/null
@@ -1,112 +0,0 @@
-[[Back]](..)
-
-# Joint Speech Text Training for the MuST-C English to German Speech Translation task
-
-Joint Training Baseline: it is based on paper ["A general multi-task learning framework to leverage text data for speech to text tasks"](https://arxiv.org/pdf/2010.11338.pdf)
-
-Enhanced Joint Training: the joint training is enhanced with pre-trained models, cross attentive regularization and online knowledge distillation based on paper ["Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task"](https://research.fb.com/publications/improving-speech-translation-by-understanding-and-learning-from-the-auxiliary-text-translation-task)
-
-## Prepare Data
-#### Download files
-- Sentence piece model [spm.model](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/spm.model)
-- Dictionary [dict.txt](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/dict.txt)
-- config [config.yaml](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/config.yaml)
-#### Prepare MuST-C data set
-- [Please follow the data preparation in the S2T example](https://github.com/pytorch/fairseq/blob/main/examples/speech_to_text/docs/mustc_example.md)
-- Append src_text in the tsv file with phoneme representation.
-```bash
- python examples/speech_text_joint_to_text/scripts/g2p_encode.py \
- --lower-case --do-filter --use-word-start --no-punc \
- --reserve-word examples/speech_text_joint_to_text/configs/mustc_noise.list \
- --data-path ${must_c_en_de_src_text} \
- --out-path ${must_c_en_de_src_text_pho}
-```
-- Update tsv data with src_text generated above and save to $MANIFEST_ROOT
-- Prepare phoneme dictionary and save to $MANIFEST_ROOT as [src_dict.txt](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/src_dict.txt)
-#### Prepare WMT text data
-- [Download wmt data](https://github.com/pytorch/fairseq/blob/main/examples/translation/prepare-wmt14en2de.sh)
-- Convert source text (English) into phoneme representation as above
-- Generate binary parallel file for training (as translation example) and save data in $parallel_text_data
-
-## Training
-The model is trained with 8 v100 GPUs.
-
-#### Download pretrained models
-- [pretrain_encoder](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_asr_transformer_m.pt)
-- [pretrain_nmt](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/checkpoint_mt.pt)
-
-#### Training scripts
-- Jointly trained model from scratch
-```bash
-python train.py ${MANIFEST_ROOT} \
- --save-dir ${save_dir} \
- --num-workers 8 \
- --task speech_text_joint_to_text \
- --arch dualinputs2ttransformer_s \
- --user-dir examples/speech_text_joint_to_text \
- --max-epoch 100 --update-mix-data \
- --optimizer adam --lr-scheduler inverse_sqrt \
- --lr 0.001 --update-freq 4 --clip-norm 10.0 \
- --criterion guided_label_smoothed_cross_entropy_with_accuracy \
- --label-smoothing 0.1 --max-tokens 10000 --max-tokens-text 10000 \
- --max-positions-text 400 --seed 2 --speech-encoder-layers 12 \
- --text-encoder-layers 6 --encoder-shared-layers 6 --decoder-layers 6 \
- --dropout 0.1 --warmup-updates 20000 \
- --text-sample-ratio 0.25 --parallel-text-data ${parallel_text_data} \
- --text-input-cost-ratio 0.5 --enc-grad-mult 2.0 --add-speech-eos \
- --log-format json --langpairs en-de --noise-token '"'"'▁NOISE'"'"' \
- --mask-text-ratio 0.0 --max-tokens-valid 20000 --ddp-backend no_c10d \
- --log-interval 100 --data-buffer-size 50 --config-yaml config.yaml \
- --keep-last-epochs 10
-```
-- Jointly trained model with good initialization, cross attentive loss and online knowledge distillation
-```bash
-python train.py ${MANIFEST_ROOT} \
- --save-dir ${save_dir} \
- --num-workers 8 \
- --task speech_text_joint_to_text \
- --arch dualinputs2ttransformer_m \
- --user-dir examples/speech_text_joint_to_text \
- --max-epoch 100 --update-mix-data \
- --optimizer adam --lr-scheduler inverse_sqrt \
- --lr 0.002 --update-freq 4 --clip-norm 10.0 \
- --criterion guided_label_smoothed_cross_entropy_with_accuracy \
- --guide-alpha 0.8 --disable-text-guide-update-num 5000 \
- --label-smoothing 0.1 --max-tokens 10000 --max-tokens-text 10000 \
- --max-positions-text 400 --seed 2 --speech-encoder-layers 12 \
- --text-encoder-layers 6 --encoder-shared-layers 6 --decoder-layers 6 \
- --dropout 0.1 --warmup-updates 20000 --attentive-cost-regularization 0.02 \
- --text-sample-ratio 0.25 --parallel-text-data ${parallel_text_data} \
- --text-input-cost-ratio 0.5 --enc-grad-mult 2.0 --add-speech-eos \
- --log-format json --langpairs en-de --noise-token '"'"'▁NOISE'"'"' \
- --mask-text-ratio 0.0 --max-tokens-valid 20000 --ddp-backend no_c10d \
- --log-interval 100 --data-buffer-size 50 --config-yaml config.yaml \
- --load-pretrain-speech-encoder ${pretrain_encoder} \
- --load-pretrain-decoder ${pretrain_nmt} \
- --load-pretrain-text-encoder-last ${pretrain_nmt} \
- --keep-last-epochs 10
-```
-
-## Evaluation
-```bash
-python ./fairseq_cli/generate.py \
- ${MANIFEST_ROOT} \
- --task speech_text_joint_to_text \
- --max-tokens 25000 \
- --nbest 1 \
- --results-path ${infer_results} \
- --batch-size 512 \
- --path ${model} \
- --gen-subset tst-COMMON \
- --config-yaml config_spm.yaml \
- --scoring sacrebleu \
- --beam 5 --lenpen 1.0 \
- --user-dir examples/speech_text_joint_to_text \
- --load-speech-only
-```
-
-## Results (Joint training with initialization + CAR + online KD)
-|Direction|En-De | En-Es | En-Fr |
-|---|---|---|---|
-|BLEU|27.4| 31.2 | 37.6 |
-|checkpoint | [link](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/checkpoint_ave_10.pt) |[link](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_es/checkpoint_ave_10.pt)|[link](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_fr/checkpoint_ave_10.pt)|
diff --git a/spaces/IDEA-CCNL/ziya2-13B-base/app.py b/spaces/IDEA-CCNL/ziya2-13B-base/app.py
deleted file mode 100644
index 011c0fdd461a98a9f256eef00c6d413a63655b20..0000000000000000000000000000000000000000
--- a/spaces/IDEA-CCNL/ziya2-13B-base/app.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import gradio as gr
-import plotly.express as px
-import numpy as np
-import wandb
-import os
-import pandas as pd
-
-wandb_key = os.getenv("wandb_key")
-proj_name = os.getenv("proj_name")
-entity_proj_name = os.getenv("entity_proj_name")
-wandb.login(key=wandb_key)
-api = wandb.Api()
-
-step=[]
-loss=[]
-runs = api.runs(entity_proj_name)
-for run in runs:
- # if run.id not in ['dys9531f','kakfv1ab']:
- # continue
- # print(run)
- history = run.scan_history(keys=['train/lm_loss','_step'])
- tmp_step=[]
- tmp_loss=[]
- for row in history:
- # if row['_step'] not in step
- tmp_step.append(row['_step'])
- tmp_loss.append(row['train/lm_loss'])
- if len(tmp_step)>35:
- step.extend(tmp_step)
- loss.extend(tmp_loss)
-
-
-def get_plot(period=1):
- # run = api.run(proj_name)
- # df = run.history()
- # history = run.scan_history(keys=['train/lm_loss','_step'])
- # step=[]
- # loss=[]
- # for row in history:
- # step.append(row['_step'])
- # loss.append(row['train/lm_loss'])
-
- # df=pd.DataFrame({'_step':step,'train/lm_loss':loss})
- # df.shape
-
- runs = api.runs(entity_proj_name)
- for run in runs:
- df = run.history()
- for i,row in df.iterrows():
- if row['_step'] not in step:
- step.append(row['_step'])
- loss.append(row['train/lm_loss'])
-
- df=pd.DataFrame({'_step':step,'train/lm_loss':loss})
- df=df.sort_values('_step', ascending=True)
-
- fig = px.line(df, x='_step', y='train/lm_loss', range_y=[0.8, 3.8], render_mode='webgl')
- return fig
-
-
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- gr.Markdown("Ziya-Llama2-13B/train/lm_loss")
- plot = gr.Plot(label="Plot (updates every 12 second)")
-
- dep = demo.load(get_plot, None, plot, every=100)
-
-
-
-if __name__ == "__main__":
- demo.queue().launch()
\ No newline at end of file
diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/modules/image_degradation/utils_image.py b/spaces/Iceclear/StableSR/StableSR/ldm/modules/image_degradation/utils_image.py
deleted file mode 100644
index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/ldm/modules/image_degradation/utils_image.py
+++ /dev/null
@@ -1,916 +0,0 @@
-import os
-import math
-import random
-import numpy as np
-import torch
-import cv2
-from torchvision.utils import make_grid
-from datetime import datetime
-#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py
-
-
-os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
-
-
-'''
-# --------------------------------------------
-# Kai Zhang (github: https://github.com/cszn)
-# 03/Mar/2019
-# --------------------------------------------
-# https://github.com/twhui/SRGAN-pyTorch
-# https://github.com/xinntao/BasicSR
-# --------------------------------------------
-'''
-
-
-IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']
-
-
-def is_image_file(filename):
- return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
-
-
-def get_timestamp():
- return datetime.now().strftime('%y%m%d-%H%M%S')
-
-
-def imshow(x, title=None, cbar=False, figsize=None):
- plt.figure(figsize=figsize)
- plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
- if title:
- plt.title(title)
- if cbar:
- plt.colorbar()
- plt.show()
-
-
-def surf(Z, cmap='rainbow', figsize=None):
- plt.figure(figsize=figsize)
- ax3 = plt.axes(projection='3d')
-
- w, h = Z.shape[:2]
- xx = np.arange(0,w,1)
- yy = np.arange(0,h,1)
- X, Y = np.meshgrid(xx, yy)
- ax3.plot_surface(X,Y,Z,cmap=cmap)
- #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap)
- plt.show()
-
-
-'''
-# --------------------------------------------
-# get image pathes
-# --------------------------------------------
-'''
-
-
-def get_image_paths(dataroot):
- paths = None # return None if dataroot is None
- if dataroot is not None:
- paths = sorted(_get_paths_from_images(dataroot))
- return paths
-
-
-def _get_paths_from_images(path):
- assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
- images = []
- for dirpath, _, fnames in sorted(os.walk(path)):
- for fname in sorted(fnames):
- if is_image_file(fname):
- img_path = os.path.join(dirpath, fname)
- images.append(img_path)
- assert images, '{:s} has no valid image file'.format(path)
- return images
-
-
-'''
-# --------------------------------------------
-# split large images into small images
-# --------------------------------------------
-'''
-
-
-def patches_from_image(img, p_size=512, p_overlap=64, p_max=800):
- w, h = img.shape[:2]
- patches = []
- if w > p_max and h > p_max:
- w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int))
- h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int))
- w1.append(w-p_size)
- h1.append(h-p_size)
-# print(w1)
-# print(h1)
- for i in w1:
- for j in h1:
- patches.append(img[i:i+p_size, j:j+p_size,:])
- else:
- patches.append(img)
-
- return patches
-
-
-def imssave(imgs, img_path):
- """
- imgs: list, N images of size WxHxC
- """
- img_name, ext = os.path.splitext(os.path.basename(img_path))
-
- for i, img in enumerate(imgs):
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png')
- cv2.imwrite(new_path, img)
-
-
-def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
- """
- split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size),
- and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max)
-    will be split.
- Args:
- original_dataroot:
- taget_dataroot:
- p_size: size of small images
- p_overlap: patch size in training is a good choice
- p_max: images with smaller size than (p_max)x(p_max) keep unchanged.
- """
- paths = get_image_paths(original_dataroot)
- for img_path in paths:
- # img_name, ext = os.path.splitext(os.path.basename(img_path))
- img = imread_uint(img_path, n_channels=n_channels)
- patches = patches_from_image(img, p_size, p_overlap, p_max)
- imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path)))
- #if original_dataroot == taget_dataroot:
- #del img_path
-
-'''
-# --------------------------------------------
-# makedir
-# --------------------------------------------
-'''
-
-
-def mkdir(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-
-def mkdirs(paths):
- if isinstance(paths, str):
- mkdir(paths)
- else:
- for path in paths:
- mkdir(path)
-
-
-def mkdir_and_rename(path):
- if os.path.exists(path):
- new_name = path + '_archived_' + get_timestamp()
- print('Path already exists. Rename it to [{:s}]'.format(new_name))
- os.rename(path, new_name)
- os.makedirs(path)
-
-
-'''
-# --------------------------------------------
-# read image from path
-# opencv is fast, but read BGR numpy image
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# get uint8 image of size HxWxn_channels (RGB)
-# --------------------------------------------
-def imread_uint(path, n_channels=3):
- # input: path
- # output: HxWx3(RGB or GGG), or HxWx1 (G)
- if n_channels == 1:
- img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE
- img = np.expand_dims(img, axis=2) # HxWx1
- elif n_channels == 3:
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG
- else:
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB
- return img
-
-
-# --------------------------------------------
-# matlab's imwrite
-# --------------------------------------------
-def imsave(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-def imwrite(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-
-
-# --------------------------------------------
-# get single image of size HxWxn_channels (BGR)
-# --------------------------------------------
-def read_img(path):
- # read image by cv2
- # return: Numpy float32, HWC, BGR, [0,1]
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE
- img = img.astype(np.float32) / 255.
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- # some images have 4 channels
- if img.shape[2] > 3:
- img = img[:, :, :3]
- return img
-
-
-'''
-# --------------------------------------------
-# image format conversion
-# --------------------------------------------
-# numpy(single) <---> numpy(uint)
-# numpy(single) <---> tensor
-# numpy(uint) <---> tensor
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# numpy(single) [0, 1] <---> numpy(uint)
-# --------------------------------------------
-
-
-def uint2single(img):
-
- return np.float32(img/255.)
-
-
-def single2uint(img):
-
- return np.uint8((img.clip(0, 1)*255.).round())
-
-
-def uint162single(img):
-
- return np.float32(img/65535.)
-
-
-def single2uint16(img):
-
- return np.uint16((img.clip(0, 1)*65535.).round())
-
-
-# --------------------------------------------
-# numpy(uint) (HxWxC or HxW) <---> tensor
-# --------------------------------------------
-
-
-# convert uint to 4-dimensional torch tensor
-def uint2tensor4(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)
-
-
-# convert uint to 3-dimensional torch tensor
-def uint2tensor3(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)
-
-
-# convert 2/3/4-dimensional torch tensor to uint
-def tensor2uint(img):
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- return np.uint8((img*255.0).round())
-
-
-# --------------------------------------------
-# numpy(single) (HxWxC) <---> tensor
-# --------------------------------------------
-
-
-# convert single (HxWxC) to 3-dimensional torch tensor
-def single2tensor3(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()
-
-
-# convert single (HxWxC) to 4-dimensional torch tensor
-def single2tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)
-
-
-# convert torch tensor to single
-def tensor2single(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
-
- return img
-
-# convert torch tensor to single
-def tensor2single3(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- elif img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return img
-
-
-def single2tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0)
-
-
-def single32tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0)
-
-
-def single42tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float()
-
-
-# from skimage.io import imread, imsave
-def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
- '''
- Converts a torch Tensor into an image Numpy array of BGR channel order
- Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
- Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
- '''
- tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp
- tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1]
- n_dim = tensor.dim()
- if n_dim == 4:
- n_img = len(tensor)
- img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 3:
- img_np = tensor.numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 2:
- img_np = tensor.numpy()
- else:
- raise TypeError(
- 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))
- if out_type == np.uint8:
- img_np = (img_np * 255.0).round()
-        # Important. Unlike matlab, numpy.uint8() WILL NOT round by default.
- return img_np.astype(out_type)
-
-
-'''
-# --------------------------------------------
-# Augmentation, flip and/or rotate
-# --------------------------------------------
-# The following two are enough.
-# (1) augment_img: numpy image of WxHxC or WxH
-# (2) augment_img_tensor4: tensor image 1xCxWxH
-# --------------------------------------------
-'''
-
-
-def augment_img(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return np.flipud(np.rot90(img))
- elif mode == 2:
- return np.flipud(img)
- elif mode == 3:
- return np.rot90(img, k=3)
- elif mode == 4:
- return np.flipud(np.rot90(img, k=2))
- elif mode == 5:
- return np.rot90(img)
- elif mode == 6:
- return np.rot90(img, k=2)
- elif mode == 7:
- return np.flipud(np.rot90(img, k=3))
-
-
-def augment_img_tensor4(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return img.rot90(1, [2, 3]).flip([2])
- elif mode == 2:
- return img.flip([2])
- elif mode == 3:
- return img.rot90(3, [2, 3])
- elif mode == 4:
- return img.rot90(2, [2, 3]).flip([2])
- elif mode == 5:
- return img.rot90(1, [2, 3])
- elif mode == 6:
- return img.rot90(2, [2, 3])
- elif mode == 7:
- return img.rot90(3, [2, 3]).flip([2])
-
-
-def augment_img_tensor(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- img_size = img.size()
- img_np = img.data.cpu().numpy()
- if len(img_size) == 3:
- img_np = np.transpose(img_np, (1, 2, 0))
- elif len(img_size) == 4:
- img_np = np.transpose(img_np, (2, 3, 1, 0))
- img_np = augment_img(img_np, mode=mode)
- img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))
- if len(img_size) == 3:
- img_tensor = img_tensor.permute(2, 0, 1)
- elif len(img_size) == 4:
- img_tensor = img_tensor.permute(3, 2, 0, 1)
-
- return img_tensor.type_as(img)
-
-
-def augment_img_np3(img, mode=0):
- if mode == 0:
- return img
- elif mode == 1:
- return img.transpose(1, 0, 2)
- elif mode == 2:
- return img[::-1, :, :]
- elif mode == 3:
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 4:
- return img[:, ::-1, :]
- elif mode == 5:
- img = img[:, ::-1, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 6:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- return img
- elif mode == 7:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
-
-
-def augment_imgs(img_list, hflip=True, rot=True):
- # horizontal flip OR rotate
- hflip = hflip and random.random() < 0.5
- vflip = rot and random.random() < 0.5
- rot90 = rot and random.random() < 0.5
-
- def _augment(img):
- if hflip:
- img = img[:, ::-1, :]
- if vflip:
- img = img[::-1, :, :]
- if rot90:
- img = img.transpose(1, 0, 2)
- return img
-
- return [_augment(img) for img in img_list]
-
-
-'''
-# --------------------------------------------
-# modcrop and shave
-# --------------------------------------------
-'''
-
-
-def modcrop(img_in, scale):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- if img.ndim == 2:
- H, W = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r]
- elif img.ndim == 3:
- H, W, C = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r, :]
- else:
- raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
- return img
-
-
-def shave(img_in, border=0):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- h, w = img.shape[:2]
- img = img[border:h-border, border:w-border]
- return img
-
-
-'''
-# --------------------------------------------
-# image processing process on numpy image
-# channel_convert(in_c, tar_type, img_list):
-# rgb2ycbcr(img, only_y=True):
-# bgr2ycbcr(img, only_y=True):
-# ycbcr2rgb(img):
-# --------------------------------------------
-'''
-
-
-def rgb2ycbcr(img, only_y=True):
- '''same as matlab rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def ycbcr2rgb(img):
- '''same as matlab ycbcr2rgb
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
- [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def bgr2ycbcr(img, only_y=True):
- '''bgr version of rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
- [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def channel_convert(in_c, tar_type, img_list):
- # conversion among BGR, gray and y
- if in_c == 3 and tar_type == 'gray': # BGR to gray
- gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in gray_list]
- elif in_c == 3 and tar_type == 'y': # BGR to y
- y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in y_list]
- elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR
- return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
- else:
- return img_list
-
-
-'''
-# --------------------------------------------
-# metric, PSNR and SSIM
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# PSNR
-# --------------------------------------------
-def calculate_psnr(img1, img2, border=0):
- # img1 and img2 have range [0, 255]
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- mse = np.mean((img1 - img2)**2)
- if mse == 0:
- return float('inf')
- return 20 * math.log10(255.0 / math.sqrt(mse))
-
-
-# --------------------------------------------
-# SSIM
-# --------------------------------------------
-def calculate_ssim(img1, img2, border=0):
- '''calculate SSIM
- the same outputs as MATLAB's
- img1, img2: [0, 255]
- '''
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- if img1.ndim == 2:
- return ssim(img1, img2)
- elif img1.ndim == 3:
- if img1.shape[2] == 3:
- ssims = []
- for i in range(3):
- ssims.append(ssim(img1[:,:,i], img2[:,:,i]))
- return np.array(ssims).mean()
- elif img1.shape[2] == 1:
- return ssim(np.squeeze(img1), np.squeeze(img2))
- else:
- raise ValueError('Wrong input image dimensions.')
-
-
-def ssim(img1, img2):
- C1 = (0.01 * 255)**2
- C2 = (0.03 * 255)**2
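-    # These are the standard SSIM stabilisation constants from Wang et al. (2004),
-    # with K1 = 0.01, K2 = 0.03 and dynamic range L = 255, i.e. C1 = 6.5025 and
-    # C2 = 58.5225 for 8-bit images.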
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1**2
- mu2_sq = mu2**2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
- (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-
-'''
-# --------------------------------------------
-# matlab's bicubic imresize (numpy and torch) [0, 1]
-# --------------------------------------------
-'''
-
-
-# matlab 'imresize' function, now only support 'bicubic'
-def cubic(x):
- absx = torch.abs(x)
- absx2 = absx**2
- absx3 = absx**3
- return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \
- (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))
-
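-# The piecewise polynomial in cubic() above is the Keys cubic convolution kernel with
-# a = -0.5, the same kernel MATLAB's imresize uses for its 'bicubic' method.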
-
-def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
- if (scale < 1) and (antialiasing):
- # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width
- kernel_width = kernel_width / scale
-
- # Output-space coordinates
- x = torch.linspace(1, out_length, out_length)
-
- # Input-space coordinates. Calculate the inverse mapping such that 0.5
- # in output space maps to 0.5 in input space, and 0.5+scale in output
- # space maps to 1.5 in input space.
- u = x / scale + 0.5 * (1 - 1 / scale)
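-    # Sanity check of this mapping: with scale = 0.5, x = 0.5 gives
-    # u = 1.0 + 0.5 * (1 - 2) = 0.5, and x = 0.5 + scale = 1.0 gives
-    # u = 2.0 + 0.5 * (1 - 2) = 1.5, matching the anchor points described above.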
-
- # What is the left-most pixel that can be involved in the computation?
- left = torch.floor(u - kernel_width / 2)
-
- # What is the maximum number of pixels that can be involved in the
- # computation? Note: it's OK to use an extra pixel here; if the
- # corresponding weights are all zero, it will be eliminated at the end
- # of this function.
- P = math.ceil(kernel_width) + 2
-
- # The indices of the input pixels involved in computing the k-th output
- # pixel are in row k of the indices matrix.
- indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(
- 1, P).expand(out_length, P)
-
- # The weights used to compute the k-th output pixel are in row k of the
- # weights matrix.
- distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices
- # apply cubic kernel
- if (scale < 1) and (antialiasing):
- weights = scale * cubic(distance_to_center * scale)
- else:
- weights = cubic(distance_to_center)
- # Normalize the weights matrix so that each row sums to 1.
- weights_sum = torch.sum(weights, 1).view(out_length, 1)
- weights = weights / weights_sum.expand(out_length, P)
-
- # If a column in weights is all zero, get rid of it. only consider the first and last column.
- weights_zero_tmp = torch.sum((weights == 0), 0)
- if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 1, P - 2)
- weights = weights.narrow(1, 1, P - 2)
- if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 0, P - 2)
- weights = weights.narrow(1, 0, P - 2)
- weights = weights.contiguous()
- indices = indices.contiguous()
- sym_len_s = -indices.min() + 1
- sym_len_e = indices.max() - in_length
- indices = indices + sym_len_s - 1
- return weights, indices, int(sym_len_s), int(sym_len_e)
-
-
-# --------------------------------------------
-# imresize for tensor image [0, 1]
-# --------------------------------------------
-def imresize(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: pytorch tensor, CHW or HW [0,1]
- # output: CHW or HW [0,1] w/o round
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(0)
- in_C, in_H, in_W = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)
- img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:, :sym_len_Hs, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[:, -sym_len_He:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(in_C, out_H, in_W)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)
- out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :, :sym_len_Ws]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, :, -sym_len_We:]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(in_C, out_H, out_W)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
- return out_2
-
-
-# --------------------------------------------
-# imresize for numpy image [0, 1]
-# --------------------------------------------
-def imresize_np(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: Numpy, HWC or HW [0,1]
- # output: HWC or HW [0,1] w/o round
- img = torch.from_numpy(img)
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(2)
-
- in_H, in_W, in_C = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)
- img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:sym_len_Hs, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[-sym_len_He:, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(out_H, in_W, in_C)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)
- out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :sym_len_Ws, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, -sym_len_We:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(out_H, out_W, in_C)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
-
- return out_2.numpy()
-
-
-if __name__ == '__main__':
- print('---')
-# img = imread_uint('test.bmp', 3)
-# img = uint2single(img)
-# img_bicubic = imresize_np(img, 1/4)
\ No newline at end of file
diff --git a/spaces/Iceclear/StableSR/StableSR/taming/models/dummy_cond_stage.py b/spaces/Iceclear/StableSR/StableSR/taming/models/dummy_cond_stage.py
deleted file mode 100644
index 6e19938078752e09b926a3e749907ee99a258ca0..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/taming/models/dummy_cond_stage.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from torch import Tensor
-
-
-class DummyCondStage:
- def __init__(self, conditional_key):
- self.conditional_key = conditional_key
- self.train = None
-
- def eval(self):
- return self
-
- @staticmethod
- def encode(c: Tensor):
- return c, None, (None, None, c)
-
- @staticmethod
- def decode(c: Tensor):
- return c
-
- @staticmethod
- def to_rgb(c: Tensor):
- return c
diff --git a/spaces/IoannisTr/Tech_Stocks_Trading_Assistant/app.py b/spaces/IoannisTr/Tech_Stocks_Trading_Assistant/app.py
deleted file mode 100644
index f5048d568fa498102fafde0f97598843bb80ce78..0000000000000000000000000000000000000000
--- a/spaces/IoannisTr/Tech_Stocks_Trading_Assistant/app.py
+++ /dev/null
@@ -1,91 +0,0 @@
-from stocks import *
-from functions import *
-from datetime import datetime
-import streamlit as st
-
-st.set_page_config(layout="wide")
-
-st.title("Tech Stocks Trading Assistant")
-
-left_column, right_column = st.columns(2)
-
-with left_column:
-
- all_tickers = {
- "Apple":"AAPL",
- "Microsoft":"MSFT",
- "Nvidia":"NVDA",
- "Paypal":"PYPL",
- "Amazon":"AMZN",
- "Spotify":"SPOT",
- #"Twitter":"TWTR",
- "adanipower":"adanipower.ns",
- "Uber":"UBER",
- "Google":"GOOG"
- }
-
- st.subheader("Technical Analysis Methods")
- option_name = st.selectbox('Choose a stock:', all_tickers.keys())
- option_ticker = all_tickers[option_name]
- execution_timestamp = datetime.now()
- 'You selected: ', option_name, "(",option_ticker,")"
- 'Last execution:', execution_timestamp
-
- s = Stock_Data()
- t = s.Ticker(tick=option_ticker)
-
- m = Models()
-
- with st.spinner('Loading stock data...'):
-
- technical_analysis_methods_outputs = {
- 'Technical Analysis Method': [
- 'Bollinger Bands (20 days & 2 stand. deviations)',
- 'Bollinger Bands (10 days & 1.5 stand. deviations)',
- 'Bollinger Bands (50 days & 3 stand. deviations)',
- 'Moving Average Convergence Divergence (MACD)'
- ],
- 'Outlook': [
- m.bollinger_bands_20d_2std(t),
- m.bollinger_bands_10d_1point5std(t),
- m.bollinger_bands_50d_3std(t),
- m.MACD(t)
- ],
- 'Timeframe of Method': [
- "Medium-term",
- "Short-term",
- "Long-term",
- "Short-term"
- ]
- }
-
- df = pd.DataFrame(technical_analysis_methods_outputs)
-
-
- def color_survived(val):
- color = ""
- if (val=="Sell" or val=="Downtrend and sell signal" or val=="Downtrend and no signal"):
- color="#EE3B3B"
- elif (val=="Buy" or val=="Uptrend and buy signal" or val=="Uptrend and no signal"):
- color="#3D9140"
- else:
- color="#CD950C"
- return f'background-color: {color}'
-
-
- st.table(df.sort_values(['Timeframe of Method'], ascending=False).
- reset_index(drop=True).style.applymap(color_survived, subset=['Outlook']))
-
-with right_column:
-
- st.subheader("FinBERT-based Sentiment Analysis")
-
-    with st.spinner("Connecting with www.marketwatch.com..."):
-        # Scrape the headlines and run FinBERT once, then reuse the result below
-        sentiment = m.finbert_headlines_sentiment(t)
-        st.plotly_chart(sentiment["fig"])
-    "Current sentiment:", sentiment["current_sentiment"], "%"
-
- st.subheader("LSTM-based 7-day stock price prediction model")
-
- with st.spinner("Compiling LSTM model.."):
- st.plotly_chart(m.LSTM_7_days_price_predictor(t))
-
\ No newline at end of file
diff --git a/spaces/ItsJayQz/Classic_Telltale_Diffusion/README.md b/spaces/ItsJayQz/Classic_Telltale_Diffusion/README.md
deleted file mode 100644
index f08f5c7040c7d0150c79ade5416a3afa7499fb73..0000000000000000000000000000000000000000
--- a/spaces/ItsJayQz/Classic_Telltale_Diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Classic Telltale Diffusion
-emoji: 🔥
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.14.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/demo_toolbox.py b/spaces/Kevin676/Real-Time-Voice-Cloning/demo_toolbox.py
deleted file mode 100644
index ea30a29275965c7e2b815cd703e891a5ca53e97b..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Real-Time-Voice-Cloning/demo_toolbox.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from pathlib import Path
-from toolbox import Toolbox
-from utils.argutils import print_args
-from utils.modelutils import check_model_paths
-import argparse
-import os
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(
- description="Runs the toolbox",
- formatter_class=argparse.ArgumentDefaultsHelpFormatter
- )
-
- parser.add_argument("-d", "--datasets_root", type=Path, help= \
- "Path to the directory containing your datasets. See toolbox/__init__.py for a list of "
- "supported datasets.", default=None)
- parser.add_argument("-e", "--enc_models_dir", type=Path, default="encoder/saved_models",
- help="Directory containing saved encoder models")
- parser.add_argument("-s", "--syn_models_dir", type=Path, default="synthesizer/saved_models",
- help="Directory containing saved synthesizer models")
- parser.add_argument("-v", "--voc_models_dir", type=Path, default="vocoder/saved_models",
- help="Directory containing saved vocoder models")
- parser.add_argument("--cpu", action="store_true", help=\
- "If True, processing is done on CPU, even when a GPU is available.")
- parser.add_argument("--seed", type=int, default=None, help=\
- "Optional random number seed value to make toolbox deterministic.")
- parser.add_argument("--no_mp3_support", action="store_true", help=\
- "If True, no mp3 files are allowed.")
- args = parser.parse_args()
- print_args(args, parser)
-
- if args.cpu:
- # Hide GPUs from Pytorch to force CPU processing
- os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
- del args.cpu
-
- ## Remind the user to download pretrained models if needed
- check_model_paths(encoder_path=args.enc_models_dir, synthesizer_path=args.syn_models_dir,
- vocoder_path=args.voc_models_dir)
-
- # Launch the toolbox
- Toolbox(**vars(args))
diff --git a/spaces/Kevin676/Voice-Cloning/app.py b/spaces/Kevin676/Voice-Cloning/app.py
deleted file mode 100644
index 95fe157698c0d8565f46ce52efa9ed945eaf3d8e..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Voice-Cloning/app.py
+++ /dev/null
@@ -1,192 +0,0 @@
-#from turtle import title
-import gradio as gr
-
-import git
-import os
-os.system('git clone https://github.com/Edresson/Coqui-TTS -b multilingual-torchaudio-SE TTS')
-os.system('pip install -q -e TTS/')
-os.system('pip install -q torchaudio==0.9.0')
-
-os.system('pip install voicefixer --upgrade')
-from voicefixer import VoiceFixer
-voicefixer = VoiceFixer()
-
-
-import sys
-TTS_PATH = "TTS/"
-
-# add libraries into environment
-sys.path.append(TTS_PATH) # set this if TTS is not installed globally
-
-import os
-import string
-import time
-import argparse
-import json
-
-import numpy as np
-import IPython
-from IPython.display import Audio
-
-import torch
-import torchaudio
-from speechbrain.pretrained import SpectralMaskEnhancement
-
-enhance_model = SpectralMaskEnhancement.from_hparams(
-source="speechbrain/metricgan-plus-voicebank",
-savedir="pretrained_models/metricgan-plus-voicebank",
-#run_opts={"device":"cuda"},
-)
-
-from TTS.tts.utils.synthesis import synthesis
-from TTS.tts.utils.text.symbols import make_symbols, phonemes, symbols
-from TTS.utils.audio import AudioProcessor
-
-
-from TTS.tts.models import setup_model
-from TTS.config import load_config
-from TTS.tts.models.vits import *
-
-OUT_PATH = 'out/'
-
-# create output path
-os.makedirs(OUT_PATH, exist_ok=True)
-
-# model vars
-MODEL_PATH = '/home/user/app/best_model_latest.pth.tar'
-CONFIG_PATH = '/home/user/app/config.json'
-TTS_LANGUAGES = "/home/user/app/language_ids.json"
-TTS_SPEAKERS = "/home/user/app/speakers.json"
-USE_CUDA = torch.cuda.is_available()
-
-# load the config
-C = load_config(CONFIG_PATH)
-
-
-# load the audio processor
-ap = AudioProcessor(**C.audio)
-
-speaker_embedding = None
-
-C.model_args['d_vector_file'] = TTS_SPEAKERS
-C.model_args['use_speaker_encoder_as_loss'] = False
-
-model = setup_model(C)
-model.language_manager.set_language_ids_from_file(TTS_LANGUAGES)
-# print(model.language_manager.num_languages, model.embedded_language_dim)
-# print(model.emb_l)
-cp = torch.load(MODEL_PATH, map_location=torch.device('cpu'))
-# remove speaker encoder
-model_weights = cp['model'].copy()
-for key in list(model_weights.keys()):
- if "speaker_encoder" in key:
- del model_weights[key]
-
-model.load_state_dict(model_weights)
-
-
-model.eval()
-
-if USE_CUDA:
- model = model.cuda()
-
-# synthesize voice
-use_griffin_lim = False
-
-os.system('pip install -q pydub ffmpeg-normalize')
-
-CONFIG_SE_PATH = "config_se.json"
-CHECKPOINT_SE_PATH = "SE_checkpoint.pth.tar"
-
-from TTS.tts.utils.speakers import SpeakerManager
-from pydub import AudioSegment
-import librosa
-
-SE_speaker_manager = SpeakerManager(encoder_model_path=CHECKPOINT_SE_PATH, encoder_config_path=CONFIG_SE_PATH, use_cuda=USE_CUDA)
-
-def compute_spec(ref_file):
- y, sr = librosa.load(ref_file, sr=ap.sample_rate)
- spec = ap.spectrogram(y)
- spec = torch.FloatTensor(spec).unsqueeze(0)
- return spec
-
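-# Illustrative usage sketch (not part of the original file); the path below is hypothetical:
-# ref_spec = compute_spec("reference_voice.wav")  # FloatTensor spectrogram with a leading batch dim of 1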
-
-
-def greet(Text,Voicetoclone,VoiceMicrophone):
- text= "%s" % (Text)
- if Voicetoclone is not None:
- reference_files= "%s" % (Voicetoclone)
- print("path url")
- print(Voicetoclone)
- sample= str(Voicetoclone)
- else:
- reference_files= "%s" % (VoiceMicrophone)
- print("path url")
- print(VoiceMicrophone)
- sample= str(VoiceMicrophone)
-    size_mb = os.path.getsize(reference_files) / 1000000  # size of the reference audio file in MB
-    if (size_mb > 30) or len(text) > 2000:
- message="File is greater than 30mb or Text inserted is longer than 2000 characters. Please re-try with smaller sizes."
- print(message)
- raise SystemExit("File is greater than 30mb. Please re-try or Text inserted is longer than 2000 characters. Please re-try with smaller sizes.")
- else:
-        os.system('ffmpeg-normalize ' + sample + ' -nt rms -t=-27 -o ' + sample + ' -ar 16000 -f')  # interpolate the Python variable; os.system does not expand "$sample"
- reference_emb = SE_speaker_manager.compute_d_vector_from_clip(reference_files)
- model.length_scale = 1 # scaler for the duration predictor. The larger it is, the slower the speech.
- model.inference_noise_scale = 0.3 # defines the noise variance applied to the random z vector at inference.
- model.inference_noise_scale_dp = 0.3 # defines the noise variance applied to the duration predictor z vector at inference.
- text = text
- model.language_manager.language_id_mapping
- language_id = 0
-
- print(" > text: {}".format(text))
- wav, alignment, _, _ = synthesis(
- model,
- text,
- C,
- "cuda" in str(next(model.parameters()).device),
- ap,
- speaker_id=None,
- d_vector=reference_emb,
- style_wav=None,
- language_id=language_id,
- enable_eos_bos_chars=C.enable_eos_bos_chars,
- use_griffin_lim=True,
- do_trim_silence=False,
- ).values()
- print("Generated Audio")
- IPython.display.display(Audio(wav, rate=ap.sample_rate))
- #file_name = text.replace(" ", "_")
- #file_name = file_name.translate(str.maketrans('', '', string.punctuation.replace('_', ''))) + '.wav'
- file_name="Audio.wav"
- out_path = os.path.join(OUT_PATH, file_name)
- print(" > Saving output to {}".format(out_path))
- ap.save_wav(wav, out_path)
-
- voicefixer.restore(input=out_path, # input wav file path
- output="audio1.wav", # output wav file path
-# cuda=True, # whether to use gpu acceleration'
- cuda = False,
- mode = 0) # You can try out mode 0, 1, or 2 to find out the best result
-
- noisy = enhance_model.load_audio(
- "audio1.wav"
- ).unsqueeze(0)
-
- enhanced = enhance_model.enhance_batch(noisy, lengths=torch.tensor([1.]))
- torchaudio.save("enhanced.wav", enhanced.cpu(), 16000)
-
- return "enhanced.wav"
-
-gr.Interface(
- fn=greet,
- inputs=[gr.inputs.Textbox(label='请输入您想要合成的文字,请自觉合法合规使用!'),gr.Audio(type="filepath", source="upload",label='请上传您喜欢的声音(wav/mp3文件, max. 30mb)'),gr.Audio(source="microphone", type="filepath", label = '请用麦克风上传您喜欢的声音,与文件上传二选一即可')],
- outputs="audio",
- title="🥳💬💕 - Voice Cloning/声音合成测试版(目前只支持英文文本合成,中文版正在开发中,敬请期待)",
- description = "注意❗:请不要生成会对个人以及组织造成侵害的内容,此程序仅供科研、学习使用。用户生成内容与程序开发者无关,请自觉合法合规使用,违反者一切后果自负。",
- article = "🤖 - 让有人文关怀的AI造福每一个人!AI向善,文明璀璨!TalktoAI - Enable the future!",
-).launch()
diff --git a/spaces/KingBlaze1227/PC-PICKERS/style.css b/spaces/KingBlaze1227/PC-PICKERS/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/KingBlaze1227/PC-PICKERS/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/LZRi/LZR-Bert-VITS2/text/tone_sandhi.py b/spaces/LZRi/LZR-Bert-VITS2/text/tone_sandhi.py
deleted file mode 100644
index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000
--- a/spaces/LZRi/LZR-Bert-VITS2/text/tone_sandhi.py
+++ /dev/null
@@ -1,351 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import List
-from typing import Tuple
-
-import jieba
-from pypinyin import lazy_pinyin
-from pypinyin import Style
-
-
-class ToneSandhi():
- def __init__(self):
- self.must_neural_tone_words = {
- '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝',
- '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊',
- '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去',
- '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号',
- '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当',
- '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
- '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
- '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
- '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
- '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
- '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
- '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
- '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨',
- '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快',
- '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜',
- '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔',
- '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
- '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
- '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
- '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实',
- '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头',
- '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼',
- '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数',
- '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气',
- '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈',
- '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方',
- '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴',
- '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦',
- '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝',
- '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹',
- '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息',
- '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤',
- '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家',
- '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故',
- '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨',
- '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅',
- '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱',
- '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱',
- '扫把', '惦记'
- }
- self.must_not_neural_tone_words = {
- "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎"
- }
- self.punc = ":,;。?!“”‘’':,;.?!"
-
- # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
- # e.g.
- # word: "家里"
- # pos: "s"
- # finals: ['ia1', 'i3']
- def _neural_sandhi(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
-
- # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
- for j, item in enumerate(word):
- if j - 1 >= 0 and item == word[j - 1] and pos[0] in {
- "n", "v", "a"
- } and word not in self.must_not_neural_tone_words:
- finals[j] = finals[j][:-1] + "5"
- ge_idx = word.find("个")
- if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) >= 1 and word[-1] in "的地得":
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 走了, 看着, 去过
- # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
- # finals[-1] = finals[-1][:-1] + "5"
- elif len(word) > 1 and word[-1] in "们子" and pos in {
- "r", "n"
- } and word not in self.must_not_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 桌上, 地下, 家里
- elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 上来, 下去
- elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
- finals[-1] = finals[-1][:-1] + "5"
-        # "个" used as a measure word
- elif (ge_idx >= 1 and
- (word[ge_idx - 1].isnumeric() or
- word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个':
- finals[ge_idx] = finals[ge_idx][:-1] + "5"
- else:
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
-
- word_list = self._split_word(word)
- finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
- for i, word in enumerate(word_list):
- # conventional neural in Chinese
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
- finals = sum(finals_list, [])
- return finals
-
- def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # e.g. 看不懂
- if len(word) == 3 and word[1] == "不":
- finals[1] = finals[1][:-1] + "5"
- else:
- for i, char in enumerate(word):
- # "不" before tone4 should be bu2, e.g. 不怕
- if char == "不" and i + 1 < len(word) and finals[i +
- 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- return finals
-
- def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # "一" in number sequences, e.g. 一零零, 二一零
- if word.find("一") != -1 and all(
- [item.isnumeric() for item in word if item != "一"]):
- return finals
-        # "一" between reduplication words should be yi5, e.g. 看一看
- elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
- finals[1] = finals[1][:-1] + "5"
- # when "一" is ordinal word, it should be yi1
- elif word.startswith("第一"):
- finals[1] = finals[1][:-1] + "1"
- else:
- for i, char in enumerate(word):
- if char == "一" and i + 1 < len(word):
- # "一" before tone4 should be yi2, e.g. 一段
- if finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- # "一" before non-tone4 should be yi4, e.g. 一天
- else:
-                        # if "一" is followed by punctuation, it is still read with the first tone (yi1)
- if word[i + 1] not in self.punc:
- finals[i] = finals[i][:-1] + "4"
- return finals
-
- def _split_word(self, word: str) -> List[str]:
- word_list = jieba.cut_for_search(word)
- word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
- first_subword = word_list[0]
- first_begin_idx = word.find(first_subword)
- if first_begin_idx == 0:
- second_subword = word[len(first_subword):]
- new_word_list = [first_subword, second_subword]
- else:
- second_subword = word[:-len(first_subword)]
- new_word_list = [second_subword, first_subword]
- return new_word_list
-
- def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
- if len(word) == 2 and self._all_tone_three(finals):
- finals[0] = finals[0][:-1] + "2"
- elif len(word) == 3:
- word_list = self._split_word(word)
- if self._all_tone_three(finals):
- # disyllabic + monosyllabic, e.g. 蒙古/包
- if len(word_list[0]) == 2:
- finals[0] = finals[0][:-1] + "2"
- finals[1] = finals[1][:-1] + "2"
- # monosyllabic + disyllabic, e.g. 纸/老虎
- elif len(word_list[0]) == 1:
- finals[1] = finals[1][:-1] + "2"
- else:
- finals_list = [
- finals[:len(word_list[0])], finals[len(word_list[0]):]
- ]
- if len(finals_list) == 2:
- for i, sub in enumerate(finals_list):
- # e.g. 所有/人
- if self._all_tone_three(sub) and len(sub) == 2:
- finals_list[i][0] = finals_list[i][0][:-1] + "2"
- # e.g. 好/喜欢
- elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \
- finals_list[0][-1][-1] == "3":
-
- finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
- finals = sum(finals_list, [])
-        # split an idiom into two words whose length is 2
- elif len(word) == 4:
- finals_list = [finals[:2], finals[2:]]
- finals = []
- for sub in finals_list:
- if self._all_tone_three(sub):
- sub[0] = sub[0][:-1] + "2"
- finals += sub
-
- return finals
-
- def _all_tone_three(self, finals: List[str]) -> bool:
- return all(x[-1] == "3" for x in finals)
-
- # merge "不" and the word behind it
-    # if we don't merge, "不" sometimes appears alone according to jieba, which may cause sandhi errors
- def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- last_word = ""
- for word, pos in seg:
- if last_word == "不":
- word = last_word + word
- if word != "不":
- new_seg.append((word, pos))
- last_word = word[:]
- if last_word == "不":
- new_seg.append((last_word, 'd'))
- last_word = ""
- return new_seg
-
-    # function 1: merge "一" and the reduplication words on its left and right, e.g. "听","一","听" ->"听一听"
-    # function 2: merge single "一" and the word behind it
-    # if we don't merge, "一" sometimes appears alone according to jieba, which may cause sandhi errors
- # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
- 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
- 0] == word and pos == "v":
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # the first and the second words are all_tone_three
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and self._all_tone_three(
- sub_finals_list[i - 1]) and self._all_tone_three(
- sub_finals_list[i]) and not merge_last[i - 1]:
-                # if the last word is a reduplication, do not merge, because reduplication needs to go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
- # the last char of first word and the first char of second word is tone_three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \
- merge_last[i - 1]:
-                # if the last word is a reduplication, do not merge, because reduplication needs to go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- try:
- seg = self._merge_yi(seg)
- except:
- print("_merge_yi failed")
- seg = self._merge_reduplication(seg)
- seg = self._merge_continuous_three_tones(seg)
- seg = self._merge_continuous_three_tones_2(seg)
- seg = self._merge_er(seg)
- return seg
-
- def modified_tone(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
- finals = self._bu_sandhi(word, finals)
- finals = self._yi_sandhi(word, finals)
- finals = self._neural_sandhi(word, pos, finals)
- finals = self._three_sandhi(word, finals)
- return finals
diff --git a/spaces/LanguageBind/LanguageBind/languagebind/depth/configuration_depth.py b/spaces/LanguageBind/LanguageBind/languagebind/depth/configuration_depth.py
deleted file mode 100644
index 0d3901b2cf96635384c1e7d1e99845a66cd6c786..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/languagebind/depth/configuration_depth.py
+++ /dev/null
@@ -1,425 +0,0 @@
-import copy
-import os
-from typing import Union
-
-from transformers import PretrainedConfig
-from transformers.utils import logging
-
-logger = logging.get_logger(__name__)
-
-
-
-
-
-
-
-class CLIPTextConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`CLIPTextModel`]. It is used to instantiate a CLIP
- text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration
- with the defaults will yield a similar configuration to that of the text encoder of the CLIP
- [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
- Args:
- vocab_size (`int`, *optional*, defaults to 49408):
- Vocabulary size of the CLIP text model. Defines the number of different tokens that can be represented by
- the `inputs_ids` passed when calling [`CLIPModel`].
- hidden_size (`int`, *optional*, defaults to 512):
- Dimensionality of the encoder layers and the pooler layer.
- intermediate_size (`int`, *optional*, defaults to 2048):
- Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- num_hidden_layers (`int`, *optional*, defaults to 12):
- Number of hidden layers in the Transformer encoder.
- num_attention_heads (`int`, *optional*, defaults to 8):
- Number of attention heads for each attention layer in the Transformer encoder.
- max_position_embeddings (`int`, *optional*, defaults to 77):
- The maximum sequence length that this model might ever be used with. Typically set this to something large
- just in case (e.g., 512 or 1024 or 2048).
- hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
- The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
-            `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
- layer_norm_eps (`float`, *optional*, defaults to 1e-5):
- The epsilon used by the layer normalization layers.
- attention_dropout (`float`, *optional*, defaults to 0.0):
- The dropout ratio for the attention probabilities.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- initializer_factor (`float`, *optional*, defaults to 1):
- A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
- testing).
-
- Example:
-
- ```python
- >>> from transformers import CLIPTextConfig, CLIPTextModel
-
- >>> # Initializing a CLIPTextConfig with openai/clip-vit-base-patch32 style configuration
- >>> configuration = CLIPTextConfig()
-
- >>> # Initializing a CLIPTextModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
- >>> model = CLIPTextModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
- model_type = "clip_text_model"
-
- def __init__(
- self,
- vocab_size=49408,
- hidden_size=512,
- intermediate_size=2048,
- projection_dim=512,
- num_hidden_layers=12,
- num_attention_heads=8,
- max_position_embeddings=77,
- hidden_act="quick_gelu",
- layer_norm_eps=1e-5,
- attention_dropout=0.0,
- initializer_range=0.02,
- initializer_factor=1.0,
- # This differs from `CLIPTokenizer`'s default and from openai/clip
- # See https://github.com/huggingface/transformers/pull/24773#issuecomment-1632287538
- pad_token_id=1,
- bos_token_id=49406,
- eos_token_id=49407,
- **kwargs,
- ):
- super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
-
- self.vocab_size = vocab_size
- self.hidden_size = hidden_size
- self.intermediate_size = intermediate_size
- self.projection_dim = projection_dim
- self.num_hidden_layers = num_hidden_layers
- self.num_attention_heads = num_attention_heads
- self.max_position_embeddings = max_position_embeddings
- self.layer_norm_eps = layer_norm_eps
- self.hidden_act = hidden_act
- self.initializer_range = initializer_range
- self.initializer_factor = initializer_factor
- self.attention_dropout = attention_dropout
- self.add_time_attn = False ######################################
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
- cls._set_token_in_kwargs(kwargs)
-
- config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
-
- # get the text config dict if we are loading from CLIPConfig
- if config_dict.get("model_type") == "clip":
- config_dict = config_dict["text_config"]
-
- if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
- logger.warning(
- f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
- f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
- )
-
- return cls.from_dict(config_dict, **kwargs)
-
-
-
-
-class CLIPVisionConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`CLIPVisionModel`]. It is used to instantiate a
- CLIP vision encoder according to the specified arguments, defining the model architecture. Instantiating a
- configuration with the defaults will yield a similar configuration to that of the vision encoder of the CLIP
- [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
- Args:
- hidden_size (`int`, *optional*, defaults to 768):
- Dimensionality of the encoder layers and the pooler layer.
- intermediate_size (`int`, *optional*, defaults to 3072):
- Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- num_hidden_layers (`int`, *optional*, defaults to 12):
- Number of hidden layers in the Transformer encoder.
- num_attention_heads (`int`, *optional*, defaults to 12):
- Number of attention heads for each attention layer in the Transformer encoder.
- image_size (`int`, *optional*, defaults to 224):
- The size (resolution) of each image.
- patch_size (`int`, *optional*, defaults to 32):
- The size (resolution) of each patch.
- hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
- The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
-            `"relu"`, `"selu"`, `"gelu_new"` and `"quick_gelu"` are supported.
- layer_norm_eps (`float`, *optional*, defaults to 1e-5):
- The epsilon used by the layer normalization layers.
- attention_dropout (`float`, *optional*, defaults to 0.0):
- The dropout ratio for the attention probabilities.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- initializer_factor (`float`, *optional*, defaults to 1):
- A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
- testing).
-
- Example:
-
- ```python
- >>> from transformers import CLIPVisionConfig, CLIPVisionModel
-
- >>> # Initializing a CLIPVisionConfig with openai/clip-vit-base-patch32 style configuration
- >>> configuration = CLIPVisionConfig()
-
- >>> # Initializing a CLIPVisionModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
- >>> model = CLIPVisionModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
-
- model_type = "clip_vision_model"
-
- def __init__(
- self,
- hidden_size=768,
- intermediate_size=3072,
- projection_dim=512,
- num_hidden_layers=12,
- num_attention_heads=12,
- num_channels=3,
- image_size=224,
- patch_size=32,
- hidden_act="quick_gelu",
- layer_norm_eps=1e-5,
- attention_dropout=0.0,
- initializer_range=0.02,
- initializer_factor=1.0,
-
- add_time_attn=False, ################################
- num_frames=1, ################################
- force_patch_dropout=0.0, ################################
- lora_r=2, ################################
- lora_alpha=16, ################################
- lora_dropout=0.0, ################################
- num_mel_bins=0.0, ################################
- target_length=0.0, ################################
- max_depth=10,
- video_decode_backend='decord', #########################
- **kwargs,
- ):
- super().__init__(**kwargs)
-
- self.hidden_size = hidden_size
- self.intermediate_size = intermediate_size
- self.projection_dim = projection_dim
- self.num_hidden_layers = num_hidden_layers
- self.num_attention_heads = num_attention_heads
- self.num_channels = num_channels
- self.patch_size = patch_size
- self.image_size = image_size
- self.initializer_range = initializer_range
- self.initializer_factor = initializer_factor
- self.attention_dropout = attention_dropout
- self.layer_norm_eps = layer_norm_eps
- self.hidden_act = hidden_act
-
- self.add_time_attn = add_time_attn ################
- self.num_frames = num_frames ################
- self.force_patch_dropout = force_patch_dropout ################
- self.lora_r = lora_r ################
- self.lora_alpha = lora_alpha ################
- self.lora_dropout = lora_dropout ################
- self.num_mel_bins = num_mel_bins ################
- self.target_length = target_length ################
- self.max_depth = max_depth ################
- self.video_decode_backend = video_decode_backend ################
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
- cls._set_token_in_kwargs(kwargs)
-
- config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
-
- # get the vision config dict if we are loading from CLIPConfig
- if config_dict.get("model_type") == "clip":
- config_dict = config_dict["vision_config"]
-
- if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
- logger.warning(
- f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
- f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
- )
-
- return cls.from_dict(config_dict, **kwargs)
-
-
-class LanguageBindDepthConfig(PretrainedConfig):
- r"""
- [`CLIPConfig`] is the configuration class to store the configuration of a [`CLIPModel`]. It is used to instantiate
- a CLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating
- a configuration with the defaults will yield a similar configuration to that of the CLIP
- [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
- Args:
- text_config (`dict`, *optional*):
- Dictionary of configuration options used to initialize [`CLIPTextConfig`].
- vision_config (`dict`, *optional*):
- Dictionary of configuration options used to initialize [`CLIPVisionConfig`].
- projection_dim (`int`, *optional*, defaults to 512):
-            Dimensionality of text and vision projection layers.
- logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
-            The initial value of the *logit_scale* parameter. Default is used as per the original CLIP implementation.
- kwargs (*optional*):
- Dictionary of keyword arguments.
-
- Example:
-
- ```python
- >>> from transformers import CLIPConfig, CLIPModel
-
- >>> # Initializing a CLIPConfig with openai/clip-vit-base-patch32 style configuration
- >>> configuration = CLIPConfig()
-
- >>> # Initializing a CLIPModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
- >>> model = CLIPModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
-
- >>> # We can also initialize a CLIPConfig from a CLIPTextConfig and a CLIPVisionConfig
- >>> from transformers import CLIPTextConfig, CLIPVisionConfig
-
- >>> # Initializing a CLIPText and CLIPVision configuration
- >>> config_text = CLIPTextConfig()
- >>> config_vision = CLIPVisionConfig()
-
- >>> config = CLIPConfig.from_text_vision_configs(config_text, config_vision)
- ```"""
-
- model_type = "LanguageBindDepth"
- is_composition = True
-
- def __init__(
- self, text_config=None, vision_config=None, projection_dim=512, logit_scale_init_value=2.6592, **kwargs
- ):
- # If `_config_dict` exist, we use them for the backward compatibility.
- # We pop out these 2 attributes before calling `super().__init__` to avoid them being saved (which causes a lot
- # of confusion!).
- text_config_dict = kwargs.pop("text_config_dict", None)
- vision_config_dict = kwargs.pop("vision_config_dict", None)
-
- super().__init__(**kwargs)
-
- # Instead of simply assigning `[text|vision]_config_dict` to `[text|vision]_config`, we use the values in
- # `[text|vision]_config_dict` to update the values in `[text|vision]_config`. The values should be same in most
- # cases, but we don't want to break anything regarding `_config_dict` that existed before commit `8827e1b2`.
- if text_config_dict is not None:
- if text_config is None:
- text_config = {}
-
- # This is the complete result when using `text_config_dict`.
- _text_config_dict = CLIPTextConfig(**text_config_dict).to_dict()
-
- # Give a warning if the values exist in both `_text_config_dict` and `text_config` but being different.
- for key, value in _text_config_dict.items():
- if key in text_config and value != text_config[key] and key not in ["transformers_version"]:
- # If specified in `text_config_dict`
- if key in text_config_dict:
- message = (
- f"`{key}` is found in both `text_config_dict` and `text_config` but with different values. "
- f'The value `text_config_dict["{key}"]` will be used instead.'
- )
- # If inferred from default argument values (just to be super careful)
- else:
- message = (
- f"`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The "
-                            f'value `text_config["{key}"]` will be overridden.'
- )
- logger.warning(message)
-
- # Update all values in `text_config` with the ones in `_text_config_dict`.
- text_config.update(_text_config_dict)
-
- if vision_config_dict is not None:
- if vision_config is None:
- vision_config = {}
-
- # This is the complete result when using `vision_config_dict`.
- _vision_config_dict = CLIPVisionConfig(**vision_config_dict).to_dict()
- # convert keys to string instead of integer
- if "id2label" in _vision_config_dict:
- _vision_config_dict["id2label"] = {
- str(key): value for key, value in _vision_config_dict["id2label"].items()
- }
-
- # Give a warning if the values exist in both `_vision_config_dict` and `vision_config` but being different.
- for key, value in _vision_config_dict.items():
- if key in vision_config and value != vision_config[key] and key not in ["transformers_version"]:
- # If specified in `vision_config_dict`
- if key in vision_config_dict:
- message = (
- f"`{key}` is found in both `vision_config_dict` and `vision_config` but with different "
- f'values. The value `vision_config_dict["{key}"]` will be used instead.'
- )
- # If inferred from default argument values (just to be super careful)
- else:
- message = (
- f"`vision_config_dict` is provided which will be used to initialize `CLIPVisionConfig`. "
-                            f'The value `vision_config["{key}"]` will be overridden.'
- )
- logger.warning(message)
-
- # Update all values in `vision_config` with the ones in `_vision_config_dict`.
- vision_config.update(_vision_config_dict)
-
- if text_config is None:
- text_config = {}
- logger.info("`text_config` is `None`. Initializing the `CLIPTextConfig` with default values.")
-
- if vision_config is None:
- vision_config = {}
- logger.info("`vision_config` is `None`. initializing the `CLIPVisionConfig` with default values.")
-
- self.text_config = CLIPTextConfig(**text_config)
- self.vision_config = CLIPVisionConfig(**vision_config)
-
- self.projection_dim = projection_dim
- self.logit_scale_init_value = logit_scale_init_value
- self.initializer_factor = 1.0
-
- @classmethod
- def from_text_vision_configs(cls, text_config: CLIPTextConfig, vision_config: CLIPVisionConfig, **kwargs):
- r"""
- Instantiate a [`CLIPConfig`] (or a derived class) from clip text model configuration and clip vision model
- configuration.
-
- Returns:
- [`CLIPConfig`]: An instance of a configuration object
- """
-
- return cls(text_config=text_config.to_dict(), vision_config=vision_config.to_dict(), **kwargs)
-
- def to_dict(self):
- """
- Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`].
-
- Returns:
- `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance,
- """
- output = copy.deepcopy(self.__dict__)
- output["text_config"] = self.text_config.to_dict()
- output["vision_config"] = self.vision_config.to_dict()
- output["model_type"] = self.__class__.model_type
- return output
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/LanguageBind/LanguageBind/training/file_utils.py b/spaces/LanguageBind/LanguageBind/training/file_utils.py
deleted file mode 100644
index 395cf7df0acc164c6851f17834d793f5852d4605..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/training/file_utils.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import logging
-import os
-import multiprocessing
-import subprocess
-import time
-import fsspec
-import torch
-from tqdm import tqdm
-
-def remote_sync_s3(local_dir, remote_dir):
- # skip epoch_latest which can change during sync.
- result = subprocess.run(["aws", "s3", "sync", local_dir, remote_dir, '--exclude', '*epoch_latest.pt'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- if result.returncode != 0:
- logging.error(f"Error: Failed to sync with S3 bucket {result.stderr.decode('utf-8')}")
- return False
-
- logging.info(f"Successfully synced with S3 bucket")
- return True
-
-def remote_sync_fsspec(local_dir, remote_dir):
- # FIXME currently this is slow and not recommended. Look into speeding up.
- a = fsspec.get_mapper(local_dir)
- b = fsspec.get_mapper(remote_dir)
-
- for k in a:
- # skip epoch_latest which can change during sync.
- if 'epoch_latest.pt' in k:
- continue
-
- logging.info(f'Attempting to sync {k}')
- if k in b and len(a[k]) == len(b[k]):
- logging.debug(f'Skipping remote sync for {k}.')
- continue
-
- try:
-            b[k] = a[k]
-            logging.info(f'Successful sync for {k}.')
- except Exception as e:
- logging.info(f'Error during remote sync for {k}: {e}')
- return False
-
- return True
-
-def remote_sync(local_dir, remote_dir, protocol):
- logging.info('Starting remote sync.')
- if protocol == 's3':
- return remote_sync_s3(local_dir, remote_dir)
- elif protocol == 'fsspec':
- return remote_sync_fsspec(local_dir, remote_dir)
- else:
- logging.error('Remote protocol not known')
- return False
-
-def keep_running_remote_sync(sync_every, local_dir, remote_dir, protocol):
- while True:
- time.sleep(sync_every)
- remote_sync(local_dir, remote_dir, protocol)
-
-def start_sync_process(sync_every, local_dir, remote_dir, protocol):
- p = multiprocessing.Process(target=keep_running_remote_sync, args=(sync_every, local_dir, remote_dir, protocol))
- return p
-
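-# Illustrative usage sketch (not part of the original module); the directories and
-# bucket names below are hypothetical:
-# p = start_sync_process(sync_every=300, local_dir='./logs/run1',
-#                        remote_dir='s3://my-bucket/logs/run1', protocol='s3')
-# p.start()
-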
-# Note: we are not currently using this save function.
-def pt_save(pt_obj, file_path):
- of = fsspec.open(file_path, "wb")
- with of as f:
-        torch.save(pt_obj, f)  # write through the fsspec file handle rather than reopening the raw path
-
-def pt_load(file_path, map_location=None):
- if file_path.startswith('s3'):
- logging.info('Loading remote checkpoint, which may take a bit.')
- of = fsspec.open(file_path, "rb")
- with of as f:
- out = torch.load(f, map_location=map_location)
- return out
-
-def check_exists(file_path):
- try:
- with fsspec.open(file_path):
- pass
- except FileNotFoundError:
- return False
- return True
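The helpers in the deleted `training/file_utils.py` mirror a local checkpoint directory to remote storage, either once or periodically from a child process. A minimal sketch, assuming the module is importable as `training.file_utils` and that the bucket path and interval below are placeholders:

    from training.file_utils import remote_sync, start_sync_process

    # One-off sync of a local log/checkpoint directory to S3 (epoch_latest.pt is skipped).
    ok = remote_sync("logs/run1", "s3://my-bucket/run1", protocol="s3")

    # Or keep syncing every 5 minutes from a background process.
    p = start_sync_process(300, "logs/run1", "s3://my-bucket/run1", "s3")
    p.start()      # start_sync_process() only builds the Process; the caller starts it
    # ... training loop ...
    p.terminate()  # stop the periodic sync when training is done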
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/modules.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/modules.py
deleted file mode 100644
index a2d4404a145bc915e6d96ee27ac0fcc408b5e90c..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/modules.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import os
-import traceback
-import logging
-
-logger = logging.getLogger(__name__)
-
-import ffmpeg
-import torch
-
-from assets.configs.config import Config
-from lib.infer.modules.uvr5.mdxnet import MDXNetDereverb
-from lib.infer.modules.uvr5.preprocess import AudioPre, AudioPreDeEcho
-
-config = Config()
-
-
-def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0):
- infos = []
- try:
- inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- save_root_vocal = (
- save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )
- save_root_ins = (
- save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )
- if model_name == "onnx_dereverb_By_FoxJoy":
- pre_fun = MDXNetDereverb(15, config.device)
- else:
- func = AudioPre if "DeEcho" not in model_name else AudioPreDeEcho
- pre_fun = func(
- agg=int(agg),
- model_path=os.path.join(
- os.getenv("weight_uvr5_root"), model_name + ".pth"
- ),
- device=config.device,
- is_half=config.is_half,
- )
- if inp_root != "":
- paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)]
- else:
- paths = [path.name for path in paths]
- for path in paths:
- inp_path = os.path.join(inp_root, path)
- need_reformat = 1
- done = 0
- try:
- info = ffmpeg.probe(inp_path, cmd="ffprobe")
- if (
- info["streams"][0]["channels"] == 2
- and info["streams"][0]["sample_rate"] == "44100"
- ):
- need_reformat = 0
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- done = 1
- except:
- need_reformat = 1
- traceback.print_exc()
- if need_reformat == 1:
- tmp_path = "%s/%s.reformatted.wav" % (
- os.path.join(os.environ["temp"]),
- os.path.basename(inp_path),
- )
- os.system(
- "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y"
- % (inp_path, tmp_path)
- )
- inp_path = tmp_path
- try:
- if done == 0:
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- infos.append("%s->Success" % (os.path.basename(inp_path)))
- yield "\n".join(infos)
- except:
- try:
- if done == 0:
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- infos.append("%s->Success" % (os.path.basename(inp_path)))
- yield "\n".join(infos)
- except:
- infos.append(
- "%s->%s" % (os.path.basename(inp_path), traceback.format_exc())
- )
- yield "\n".join(infos)
- except:
- infos.append(traceback.format_exc())
- yield "\n".join(infos)
- finally:
- try:
- if model_name == "onnx_dereverb_By_FoxJoy":
- del pre_fun.pred.model
- del pre_fun.pred.model_
- else:
- del pre_fun.model
- del pre_fun
- except:
- traceback.print_exc()
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- logger.info("Executed torch.cuda.empty_cache()")
- yield "\n".join(infos)
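`uvr()` above is a generator that separates every file in a folder and yields a growing status log after each one. A minimal driver sketch, assuming the repo layout of the deleted Space, that the UVR5 weights live in the directory named by `weight_uvr5_root`, and that `HP2_all_vocals` is one of the available model names (both are assumptions, not fixed values):

    import os
    from lib.infer.modules.uvr5.modules import uvr

    os.environ["weight_uvr5_root"] = "assets/uvr5_weights"  # folder holding the .pth models

    for status in uvr(
        model_name="HP2_all_vocals",   # example model; any name except the MDX dereverb branch
        inp_root="songs",              # every audio file in this folder is processed
        save_root_vocal="out/vocals",
        paths=[],                      # ignored because inp_root is non-empty
        save_root_ins="out/instrumental",
        agg=10,                        # aggressiveness of the separation
        format0="wav",
    ):
        print(status)                  # cumulative "file -> Success / traceback" report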
diff --git a/spaces/Liviox24/LoanEligibilityPrediction/app.py b/spaces/Liviox24/LoanEligibilityPrediction/app.py
deleted file mode 100644
index 794fe8552db250c6098199f6e338c03c194c1748..0000000000000000000000000000000000000000
--- a/spaces/Liviox24/LoanEligibilityPrediction/app.py
+++ /dev/null
@@ -1,440 +0,0 @@
-# -*- coding: utf-8 -*-
-"""LoanEligibilityPrediction.ipynb
-
-Automatically generated by Colaboratory.
-
-Original file is located at
- https://colab.research.google.com/drive/15wGr9tHgIq7Ua4af83Z0UqfAsH8dyOEZ
-
-# IMPORT LIBRERIE
-"""
-
-# Commented out IPython magic to ensure Python compatibility.
-import numpy as np
-import pandas as pd
-import seaborn as sns
-import gradio as gr
-import matplotlib.pyplot as plt
-# %matplotlib inline
-
-from sklearn.model_selection import train_test_split
-from sklearn.preprocessing import MinMaxScaler
-from sklearn.preprocessing import StandardScaler
-
-"""# COLLEZIONE DATI"""
-
-url = "https://raw.githubusercontent.com/livio-24/LoanEligibilityPrediction/main/dataset.csv"
-
-# load the dataset into a pandas DataFrame
-dataset = pd.read_csv(url)
-
-"""# EXPLORATORY DATA ANALYSIS"""
-
-# first 5 rows
-dataset.head()
-
-# number of rows and columns
-dataset.shape
-
-dataset.describe()
-# statistical summary measures
-
-# column info
-# 5 numeric variables and 8 categorical variables
-dataset.info()
-
-# Distribution of the target variable
-dataset['Loan_Status'].value_counts()
-
-# number of missing values in each column
-# they will be handled later in the data cleaning step
-dataset.isnull().sum()
-
-# drop the Loan_ID column because it is not useful
-dataset.drop(columns='Loan_ID', axis = 1, inplace=True)
-
-dataset.head()
-
-"""**DATA VISUALIZATION - ANALISI UNIVARIATA**
-
-VARIABILI CATEGORICHE
-"""
-
-# plot categorical variable values as percentages
-dataset['Gender'].value_counts(normalize=True).plot.bar(title='Gender')
-plt.show()
-dataset['Married'].value_counts(normalize=True).plot.bar(title='Married')
-plt.show()
-dataset['Self_Employed'].value_counts(normalize=True).plot.bar(title='Self_Employed')
-plt.show()
-dataset['Credit_History'].value_counts(normalize=True).plot.bar(title='Credit_History')
-plt.show()
-
-"""Risultati:
-- 80% dei candidati nel dataset è maschio
-- Circa il 65% dei candidati nel dataset è sposato/a
-- Circa il 15% lavora in proprio
-- Circa l'85% ha ripagato i propri debiti
-
-VARIABILI ORDINALI
-"""
-
-# plot ordinal variable values as percentages
-dataset['Dependents'].value_counts(normalize=True).plot.bar(title='Dependents')
-plt.show()
-dataset['Education'].value_counts(normalize=True).plot.bar(title='Education')
-plt.show()
-dataset['Property_Area'].value_counts(normalize=True).plot.bar(title='Property_Area')
-plt.show()
-
-"""Risultati:
-- La maggior parte dei candidati non ha familiari dipendenti
-- Circa l'80% dei candidati ha una laurea
-- La maggior parte dei candidati vive in un'area semiurbana
-
-VARIABILI NUMERICHE
-"""
-
-# distribution of the 'ApplicantIncome' variable
-sns.distplot(dataset['ApplicantIncome'])
-plt.show()
-# boxplot to spot outliers
-dataset.boxplot(['ApplicantIncome'])
-plt.show()
-
-# distribution of the 'CoapplicantIncome' variable
-sns.distplot(dataset['CoapplicantIncome'])
-plt.show()
-# boxplot to spot outliers
-dataset.boxplot(['CoapplicantIncome'])
-plt.show()
-
-# distribution of the 'LoanAmount' variable
-sns.distplot(dataset['LoanAmount'])
-plt.show()
-dataset.boxplot(['LoanAmount'])
-plt.show()
-
-#dataset['LoanAmount'].hist(bins=20)
-
-# distribution of the 'Loan_Amount_Term' variable
-sns.distplot(dataset['Loan_Amount_Term'])
-plt.show()
-dataset.boxplot(['Loan_Amount_Term'])
-plt.show()
-
-"""La maggior parte delle features numeriche ha degli outliers
-
-**Matrice di correlazione**
-"""
-
-correlation_matrix = dataset.corr()
-
-# heat map to visualize the correlation matrix
-sns.heatmap(correlation_matrix, cbar=True, fmt='.1f', annot=True, cmap='coolwarm')
-#plt.savefig('Correlation Heat map', bbox_inches='tight')
-
-"""Non ci sono molte variabili correlate tra di loro, le uniche due sono ApplicantIncome - LoanAmount"""
-
-# convert categorical variables to numeric
-dataset.replace({'Gender':{'Male':0, 'Female':1}, 'Married' :{'No':0, 'Yes':1}, 'Education':{'Not Graduate':0, 'Graduate':1}, 'Self_Employed':{'No':0, 'Yes':1}, 'Property_Area':{'Rural':0, 'Urban':1, 'Semiurban':2}, 'Loan_Status':{'N':0, 'Y':1}}, inplace = True)
-
-
-# replace the value '3+' with 4
-dataset['Dependents'].replace(to_replace='3+', value=4, inplace=True)
-
-"""# DATA CLEANING
-
-**MISSING VALUES CHECK**
-"""
-
-dataset.isnull().sum()
-
-# Replace missing values with the mode for the categorical variables
-dataset['Gender'].fillna(dataset['Gender'].mode()[0], inplace=True)
-dataset['Married'].fillna(dataset['Married'].mode()[0], inplace=True)
-dataset['Dependents'].fillna(dataset['Dependents'].mode()[0], inplace=True)
-dataset['Self_Employed'].fillna(dataset['Self_Employed'].mode()[0], inplace=True)
-dataset['Credit_History'].fillna(dataset['Credit_History'].mode()[0], inplace=True)
-
-# Use the median because the variable has outliers, so the mean is not a good choice
-dataset['LoanAmount'].fillna(dataset['LoanAmount'].median(), inplace=True)
-#dataset['LoanAmount'].fillna(dataset['LoanAmount'].mean(), inplace=True)
-
-dataset['Loan_Amount_Term'].value_counts()
-
-# In Loan_Amount_Term, 360 is by far the most frequent value, so we use the mode
-dataset['Loan_Amount_Term'].fillna(dataset['Loan_Amount_Term'].mode()[0], inplace=True)
-
-dataset.isnull().sum()
-
-# Convert the dtype of Dependents to int
-dataset['Dependents'] = dataset['Dependents'].astype(str).astype(int)
-dataset.info()
-
-"""**GESTIONE OUTLIERS**"""
-
-fig, axs = plt.subplots(2, 2, figsize=(10, 8))
-
-# Distributions before applying the log transform
-sns.histplot(data=dataset, x="ApplicantIncome", kde=True, ax=axs[0, 0], color='green')
-sns.histplot(data=dataset, x="CoapplicantIncome", kde=True, ax=axs[0, 1], color='skyblue')
-sns.histplot(data=dataset, x="LoanAmount", kde=True, ax=axs[1, 0], color='orange')
-
-# Log transformation to normalize the distributions
-
-dataset.ApplicantIncome = np.log(dataset.ApplicantIncome)
-dataset.CoapplicantIncome = np.log(dataset.CoapplicantIncome + 1)
-dataset.LoanAmount = np.log(dataset.LoanAmount)
-
-fig, axs = plt.subplots(2, 2, figsize=(10, 8))
-
-# Distributions after applying the log transform
-sns.histplot(data=dataset, x="ApplicantIncome", kde=True, ax=axs[0, 0], color='green')
-sns.histplot(data=dataset, x="CoapplicantIncome", kde=True, ax=axs[0, 1], color='skyblue')
-sns.histplot(data=dataset, x="LoanAmount", kde=True, ax=axs[1, 0], color='orange')
-
-"""Possiamo notare che la distribuzione è migliorata dopo aver applicato il logaritmo
-
-# SPLIT DATASET
-"""
-
-# define dependent and independent variables
-
-x = dataset.drop('Loan_Status', axis = 1)
-y = dataset['Loan_Status']
-
-#split dataset
-
-X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42, stratify = y)
-
-print("X_train dataset: ", X_train.shape)
-print("y_train dataset: ", y_train.shape)
-print("X_test dataset: ", X_test.shape)
-print("y_test dataset: ", y_test.shape)
-
-y_test.value_counts()
-
-# Distribution of the dependent variable
-plt.figure(figsize=(5,5))
-pd.value_counts(dataset['Loan_Status']).plot.bar()
-plt.xlabel('Loan_Status')
-plt.ylabel('Frequency')
-dataset['Loan_Status'].value_counts()
-plt.savefig('target_distr', bbox_inches='tight')
-
-"""# DATA SCALING"""
-
-# Normalization (min-max scaling)
-scaler = MinMaxScaler(feature_range=(0, 1))
-X_train = scaler.fit_transform(X_train)
-X_test = scaler.transform(X_test)  # transform only: the scaler must be fitted on the training set alone
-
-#z-score
-#scaler = StandardScaler()
-#X_train=scaler.fit_transform(X_train)
-#X_test=scaler.transform(X_test)
-
-df = pd.DataFrame(X_train, columns = x.columns)
-
-df
-
-"""# FEATURE SELECTION"""
-
-# supervised feature selection
-
-from sklearn.feature_selection import SelectKBest
-from sklearn.feature_selection import chi2, f_classif
-from numpy import set_printoptions
-
-fs = SelectKBest(score_func=chi2,k=5)
-fs.fit_transform(X_train, y_train)
-
-X_new_train = fs.transform(X_train)
-X_new_test = fs.transform(X_test)
-print(X_new_train.shape)
-
-x.columns[fs.get_support(indices=True)]
-print("features selezionate: ", x.columns[fs.get_support(indices=True)].tolist())
-
-"""# COSTRUZIONE MODELLI"""
-
-models = []
-precision = []
-accuracy = []
-recall = []
-f1 = []
-
-"""**LOGISTIC REGRESSION**"""
-
-from sklearn.linear_model import LogisticRegression
-from sklearn.metrics import classification_report, confusion_matrix, plot_confusion_matrix, accuracy_score ,recall_score, precision_score, f1_score
-
-logisticRegr = LogisticRegression()
-logisticRegr.fit(X_new_train, y_train)
-
-y_train_pred = logisticRegr.predict(X_new_train)
-y_test_pred = logisticRegr.predict(X_new_test)
-
-fig, ax = plt.subplots(figsize=(8, 8))
-plot_confusion_matrix(logisticRegr, X_new_test, y_test, ax=ax)
-plt.show()
-#print(confusion_matrix(y_test, y_test_pred))
-
-# Results obtained
-print(classification_report(y_test, y_test_pred))
-print("Accuracy on training data:",accuracy_score(y_train, y_train_pred))
-print("Accuracy on test data:",accuracy_score(y_test, y_test_pred))
-
-models.append('Logistic Regression')
-accuracy.append(accuracy_score(y_test, y_test_pred))
-recall.append(recall_score(y_test, y_test_pred))
-precision.append(precision_score(y_test, y_test_pred))
-f1.append(f1_score(y_test, y_test_pred))
-
-"""**DECISION TREE**"""
-
-from sklearn.tree import DecisionTreeClassifier
-
-tree_model = DecisionTreeClassifier( random_state=42)
-tree_model.fit(X_new_train, y_train)
-
-y_train_pred = tree_model.predict(X_new_train)
-y_test_pred = tree_model.predict(X_new_test)
-
-fig, ax = plt.subplots(figsize=(8, 8))
-plot_confusion_matrix(tree_model, X_new_test, y_test, ax=ax)
-plt.show()
-
-print(classification_report(y_test, y_test_pred))
-print("Accuracy on training data:",accuracy_score(y_train, y_train_pred))
-print("Accuracy on test data:",accuracy_score(y_test, y_test_pred))
-
-models.append('Decision Tree')
-accuracy.append(accuracy_score(y_test, y_test_pred))
-recall.append(recall_score(y_test, y_test_pred))
-precision.append(precision_score(y_test, y_test_pred))
-f1.append(f1_score(y_test, y_test_pred))
-
-"""**NAIVE BAYES**"""
-
-from sklearn.naive_bayes import GaussianNB
-
-NB = GaussianNB()
-NB.fit(X_new_train, y_train)
-
-y_train_pred = NB.predict(X_new_train)
-y_test_pred = NB.predict(X_new_test)
-
-fig, ax = plt.subplots(figsize=(8, 8))
-plot_confusion_matrix(NB, X_new_test, y_test, ax=ax)
-plt.show()
-
-print(classification_report(y_test, y_test_pred))
-print("Accuracy on training data:",accuracy_score(y_train, y_train_pred))
-print("Accuracy on test data:",accuracy_score(y_test, y_test_pred))
-
-models.append('Naive Bayes')
-accuracy.append(accuracy_score(y_test, y_test_pred))
-recall.append(recall_score(y_test, y_test_pred))
-precision.append(precision_score(y_test, y_test_pred))
-f1.append(f1_score(y_test, y_test_pred))
-
-"""**RANDOM FOREST**"""
-
-from sklearn.ensemble import RandomForestClassifier
-
-RandomForest = RandomForestClassifier()
-RandomForest.fit(X_new_train, y_train)
-
-y_train_pred = RandomForest.predict(X_new_train)
-y_test_pred = RandomForest.predict(X_new_test)
-
-fig, ax = plt.subplots(figsize=(8, 8))
-plot_confusion_matrix(RandomForest, X_new_test, y_test, ax=ax)
-plt.show()
-
-print(classification_report(y_test, y_test_pred))
-print("Accuracy on training data:",accuracy_score(y_train, y_train_pred))
-print("Accuracy on test data:",accuracy_score(y_test, y_test_pred))
-
-models.append('Random Forest')
-accuracy.append(accuracy_score(y_test, y_test_pred))
-recall.append(recall_score(y_test, y_test_pred))
-precision.append(precision_score(y_test, y_test_pred))
-f1.append(f1_score(y_test, y_test_pred))
-
-"""**XGBOOST**"""
-
-from xgboost import XGBClassifier
-
-XGB = XGBClassifier()
-XGB.fit(X_new_train, y_train)
-
-y_train_pred = XGB.predict(X_new_train)
-y_test_pred = XGB.predict(X_new_test)
-
-fig, ax = plt.subplots(figsize=(8, 8))
-plot_confusion_matrix(XGB, X_new_test, y_test, ax=ax)
-plt.show()
-
-print(classification_report(y_test, y_test_pred))
-print("Accuracy on training data:",accuracy_score(y_train, y_train_pred))
-print("Accuracy on test data:",accuracy_score(y_test, y_test_pred))
-
-models.append('XGBoost')
-accuracy.append(accuracy_score(y_test, y_test_pred))
-recall.append(recall_score(y_test, y_test_pred))
-precision.append(precision_score(y_test, y_test_pred))
-f1.append(f1_score(y_test, y_test_pred))
-
-"""**CONFRONTO METRICHE**"""
-
-compare = pd.DataFrame({'Model': models,
- 'Accuracy': accuracy,
- 'Precision': precision,
- 'Recall': recall,
- 'f1_score': f1})
-compare.sort_values(by='Accuracy', ascending=False)
-#print(compare.to_latex())
-
-def loan(Gender, Married, Dependents, Education, Self_Employed, ApplicantIncome, CoapplicantIncome, LoanAmount, Loan_Amount_Term, Credit_History, Property_Area):
-#turning the arguments into a numpy array
- Marr = 0 if Married == 'No' else 1
- Educ = 0 if Education == 'Not Graduate' else 1
- CredHis = 0 if Credit_History == '0: bad credit history' else 1
- Dep = 4 if Dependents == '3+' else int(Dependents)
- Gen = 0 if Gender == 'Male' else 1
- Self_Empl = 0 if Self_Employed == 'No' else 1
- if Property_Area == 'Rural': PA = 0
- elif Property_Area == 'Urban': PA = 1
- else: PA = 2
-
-
-
- instance = np.array([Marr, Educ, CoapplicantIncome, CredHis, PA, Gen, Self_Empl, Dep, ApplicantIncome, LoanAmount, Loan_Amount_Term])
-
-
- #reshaping into 2D array
- instance_resh = instance.reshape(1,-1)
- new_instance_resh = scaler.transform(instance_resh)
- new_instance_resh = np.delete(new_instance_resh, [5,6,7,8,9,10], axis=1)
- prediction = logisticRegr.predict(new_instance_resh)
-
- return ("Loan approved" if prediction[0] == 1 else "Loan not approved")
-
-app = gr.Interface(fn=loan,
- inputs=[gr.Radio(['Male', 'Female']),
- gr.Radio(['Yes', 'No']),
- gr.Radio(['0', '1', '2', '3+']),
- gr.Radio(['Graduate', 'Not Graduate']),
- gr.Radio(['Yes', 'No']),
- "number",
- "number",
- "number",
- "number",
- gr.Radio(['0: bad credit history', '1: good credit history']),
- gr.Radio(['Urban', 'Semiurban', 'Rural'])],
- outputs="text",
- title = "Loan Eligibility Prediction")
-app.launch(debug=True)
\ No newline at end of file
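The notebook above imputes missing values, log-transforms the skewed income/amount columns, min-max scales, keeps the five best chi2 features and then compares classifiers. The same flow can be written as one scikit-learn pipeline, which also guarantees that the scaler and selector are fitted on the training split only (the point of the transform-only call on `X_test`); a compact sketch reusing the `X_train`/`X_test` splits defined above, with illustrative hyper-parameters:

    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.linear_model import LogisticRegression

    pipe = Pipeline([
        ("scale", MinMaxScaler(feature_range=(0, 1))),  # fitted on the training split only
        ("select", SelectKBest(score_func=chi2, k=5)),  # same chi2 selection as the notebook
        ("clf", LogisticRegression()),
    ])

    pipe.fit(X_train, y_train)         # fit_transform happens inside, on training data only
    print(pipe.score(X_test, y_test))  # test data is only ever transform()-ed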
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/abinet.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/abinet.py
deleted file mode 100644
index 19c6b66731f0b205741037ece8d6b49f91d0110b..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/abinet.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# num_chars depends on the configuration of label_convertor. The actual
-# dictionary size is 36 + 1 (<BOS/EOS>).
-# TODO: Automatically update num_chars based on the configuration of
-# label_convertor
-num_chars = 37
-max_seq_len = 26
-
-label_convertor = dict(
- type='ABIConvertor',
- dict_type='DICT36',
- with_unknown=False,
- with_padding=False,
- lower=True,
-)
-
-model = dict(
- type='ABINet',
- backbone=dict(type='ResNetABI'),
- encoder=dict(
- type='ABIVisionModel',
- encoder=dict(
- type='TransformerEncoder',
- n_layers=3,
- n_head=8,
- d_model=512,
- d_inner=2048,
- dropout=0.1,
- max_len=8 * 32,
- ),
- decoder=dict(
- type='ABIVisionDecoder',
- in_channels=512,
- num_channels=64,
- attn_height=8,
- attn_width=32,
- attn_mode='nearest',
- use_result='feature',
- num_chars=num_chars,
- max_seq_len=max_seq_len,
- init_cfg=dict(type='Xavier', layer='Conv2d')),
- ),
- decoder=dict(
- type='ABILanguageDecoder',
- d_model=512,
- n_head=8,
- d_inner=2048,
- n_layers=4,
- dropout=0.1,
- detach_tokens=True,
- use_self_attn=False,
- pad_idx=num_chars - 1,
- num_chars=num_chars,
- max_seq_len=max_seq_len,
- init_cfg=None),
- fuser=dict(
- type='ABIFuser',
- d_model=512,
- num_chars=num_chars,
- init_cfg=None,
- max_seq_len=max_seq_len,
- ),
- loss=dict(
- type='ABILoss',
- enc_weight=1.0,
- dec_weight=1.0,
- fusion_weight=1.0,
- num_classes=num_chars),
- label_convertor=label_convertor,
- max_seq_len=max_seq_len,
- iter_size=3)
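This `_base_` file only builds nested Python dicts; it is consumed by the MMOCR/MMCV config loader rather than imported directly. A minimal sketch of how such a config is typically read, assuming an MMCV version that still ships `mmcv.Config` (newer stacks expose the same API via `mmengine.Config`) and that the path below matches the repository layout:

    from mmcv import Config

    cfg = Config.fromfile("configs/_base_/recog_models/abinet.py")
    print(cfg.model.type)                  # "ABINet"
    print(cfg.model.decoder.max_seq_len)   # 26, shared through the max_seq_len variable
    print(cfg.label_convertor.dict_type)   # "DICT36"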
diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/sanskrit.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
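`devanagari_to_ipa()` first normalizes the danda punctuation, transliterates Devanagari to IAST with `indic_transliteration`, and then rewrites IAST tokens into IPA using the table above. A minimal usage sketch, assuming the Space root is on `sys.path` so the module resolves as `text.sanskrit` and that `indic_transliteration` is installed; the sample string is illustrative:

    from text.sanskrit import devanagari_to_ipa

    # A short Devanagari phrase; the exact IPA output depends on the rule table above.
    print(devanagari_to_ipa("नमस्ते"))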
diff --git a/spaces/Marne/MockingBird/mockingbirdforuse/vocoder/hifigan/__init__.py b/spaces/Marne/MockingBird/mockingbirdforuse/vocoder/hifigan/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Marshalls/testmtd/analysis/pymo/mocapplayer/libs/threejs/dat.gui.min.js b/spaces/Marshalls/testmtd/analysis/pymo/mocapplayer/libs/threejs/dat.gui.min.js
deleted file mode 100644
index 8f5808005e760f05eabfd454f73a0c1d8f5c3e6e..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/analysis/pymo/mocapplayer/libs/threejs/dat.gui.min.js
+++ /dev/null
@@ -1,14 +0,0 @@
-/**
- * dat-gui JavaScript Controller Library
- * http://code.google.com/p/dat-gui
- *
- * Copyright 2011 Data Arts Team, Google Creative Lab
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- */
-!function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t():"function"==typeof define&&define.amd?define([],t):"object"==typeof exports?exports.dat=t():e.dat=t()}(this,function(){return function(e){function t(o){if(n[o])return n[o].exports;var i=n[o]={exports:{},id:o,loaded:!1};return e[o].call(i.exports,i,i.exports,t),i.loaded=!0,i.exports}var n={};return t.m=e,t.c=n,t.p="",t(0)}([function(e,t,n){"use strict";t.__esModule=!0,t["default"]=n(1),e.exports=t["default"]},function(e,t,n){"use strict";t.__esModule=!0,t["default"]={color:{Color:n(2),math:n(6),interpret:n(3)},controllers:{Controller:n(7),BooleanController:n(8),OptionController:n(10),StringController:n(11),NumberController:n(12),NumberControllerBox:n(13),NumberControllerSlider:n(14),FunctionController:n(15),ColorController:n(16)},dom:{dom:n(9)},gui:{GUI:n(17)},GUI:n(17)},e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function a(e,t,n){Object.defineProperty(e,t,{get:function(){return"RGB"===this.__state.space?this.__state[t]:(p.recalculateRGB(this,t,n),this.__state[t])},set:function(e){"RGB"!==this.__state.space&&(p.recalculateRGB(this,t,n),this.__state.space="RGB"),this.__state[t]=e}})}function r(e,t){Object.defineProperty(e,t,{get:function(){return"HSV"===this.__state.space?this.__state[t]:(p.recalculateHSV(this),this.__state[t])},set:function(e){"HSV"!==this.__state.space&&(p.recalculateHSV(this),this.__state.space="HSV"),this.__state[t]=e}})}t.__esModule=!0;var s=n(3),l=o(s),d=n(6),u=o(d),c=n(4),f=o(c),h=n(5),_=o(h),p=function(){function e(){if(i(this,e),this.__state=l["default"].apply(this,arguments),this.__state===!1)throw new Error("Failed to interpret color arguments");this.__state.a=this.__state.a||1}return e.prototype.toString=function(){return f["default"](this)},e.prototype.toOriginal=function(){return this.__state.conversion.write(this)},e}();p.recalculateRGB=function(e,t,n){if("HEX"===e.__state.space)e.__state[t]=u["default"].component_from_hex(e.__state.hex,n);else{if("HSV"!==e.__state.space)throw new Error("Corrupted color state");_["default"].extend(e.__state,u["default"].hsv_to_rgb(e.__state.h,e.__state.s,e.__state.v))}},p.recalculateHSV=function(e){var t=u["default"].rgb_to_hsv(e.r,e.g,e.b);_["default"].extend(e.__state,{s:t.s,v:t.v}),_["default"].isNaN(t.h)?_["default"].isUndefined(e.__state.h)&&(e.__state.h=0):e.__state.h=t.h},p.COMPONENTS=["r","g","b","h","s","v","hex","a"],a(p.prototype,"r",2),a(p.prototype,"g",1),a(p.prototype,"b",0),r(p.prototype,"h"),r(p.prototype,"s"),r(p.prototype,"v"),Object.defineProperty(p.prototype,"a",{get:function(){return this.__state.a},set:function(e){this.__state.a=e}}),Object.defineProperty(p.prototype,"hex",{get:function(){return"HEX"!==!this.__state.space&&(this.__state.hex=u["default"].rgb_to_hex(this.r,this.g,this.b)),this.__state.hex},set:function(e){this.__state.space="HEX",this.__state.hex=e}}),t["default"]=p,e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}t.__esModule=!0;var i=n(4),a=o(i),r=n(5),s=o(r),l=[{litmus:s["default"].isString,conversions:{THREE_CHAR_HEX:{read:function(e){var t=e.match(/^#([A-F0-9])([A-F0-9])([A-F0-9])$/i);return null!==t&&{space:"HEX",hex:parseInt("0x"+t[1].toString()+t[1].toString()+t[2].toString()+t[2].toString()+t[3].toString()+t[3].toString(),0)}},write:a["default"]},SIX_CHAR_HEX:{read:function(e){var 
t=e.match(/^#([A-F0-9]{6})$/i);return null!==t&&{space:"HEX",hex:parseInt("0x"+t[1].toString(),0)}},write:a["default"]},CSS_RGB:{read:function(e){var t=e.match(/^rgb\(\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*\)/);return null!==t&&{space:"RGB",r:parseFloat(t[1]),g:parseFloat(t[2]),b:parseFloat(t[3])}},write:a["default"]},CSS_RGBA:{read:function(e){var t=e.match(/^rgba\(\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*,\s*(.+)\s*\)/);return null!==t&&{space:"RGB",r:parseFloat(t[1]),g:parseFloat(t[2]),b:parseFloat(t[3]),a:parseFloat(t[4])}},write:a["default"]}}},{litmus:s["default"].isNumber,conversions:{HEX:{read:function(e){return{space:"HEX",hex:e,conversionName:"HEX"}},write:function(e){return e.hex}}}},{litmus:s["default"].isArray,conversions:{RGB_ARRAY:{read:function(e){return 3===e.length&&{space:"RGB",r:e[0],g:e[1],b:e[2]}},write:function(e){return[e.r,e.g,e.b]}},RGBA_ARRAY:{read:function(e){return 4===e.length&&{space:"RGB",r:e[0],g:e[1],b:e[2],a:e[3]}},write:function(e){return[e.r,e.g,e.b,e.a]}}}},{litmus:s["default"].isObject,conversions:{RGBA_OBJ:{read:function(e){return!!(s["default"].isNumber(e.r)&&s["default"].isNumber(e.g)&&s["default"].isNumber(e.b)&&s["default"].isNumber(e.a))&&{space:"RGB",r:e.r,g:e.g,b:e.b,a:e.a}},write:function(e){return{r:e.r,g:e.g,b:e.b,a:e.a}}},RGB_OBJ:{read:function(e){return!!(s["default"].isNumber(e.r)&&s["default"].isNumber(e.g)&&s["default"].isNumber(e.b))&&{space:"RGB",r:e.r,g:e.g,b:e.b}},write:function(e){return{r:e.r,g:e.g,b:e.b}}},HSVA_OBJ:{read:function(e){return!!(s["default"].isNumber(e.h)&&s["default"].isNumber(e.s)&&s["default"].isNumber(e.v)&&s["default"].isNumber(e.a))&&{space:"HSV",h:e.h,s:e.s,v:e.v,a:e.a}},write:function(e){return{h:e.h,s:e.s,v:e.v,a:e.a}}},HSV_OBJ:{read:function(e){return!!(s["default"].isNumber(e.h)&&s["default"].isNumber(e.s)&&s["default"].isNumber(e.v))&&{space:"HSV",h:e.h,s:e.s,v:e.v}},write:function(e){return{h:e.h,s:e.s,v:e.v}}}}}],d=void 0,u=void 0,c=function(){u=!1;var e=arguments.length>1?s["default"].toArray(arguments):arguments[0];return s["default"].each(l,function(t){if(t.litmus(e))return s["default"].each(t.conversions,function(t,n){if(d=t.read(e),u===!1&&d!==!1)return u=d,d.conversionName=n,d.conversion=t,s["default"].BREAK}),s["default"].BREAK}),u};t["default"]=c,e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}t.__esModule=!0;var i=n(5),a=o(i);t["default"]=function(e){if(1===e.a||a["default"].isUndefined(e.a)){for(var t=e.hex.toString(16);t.length<6;)t="0"+t;return"#"+t}return"rgba("+Math.round(e.r)+","+Math.round(e.g)+","+Math.round(e.b)+","+e.a+")"},e.exports=t["default"]},function(e,t){"use strict";t.__esModule=!0;var n=Array.prototype.forEach,o=Array.prototype.slice,i={BREAK:{},extend:function(e){return this.each(o.call(arguments,1),function(t){if(!this.isUndefined(t)){var n=Object.keys(t);n.forEach(function(n){this.isUndefined(t[n])||(e[n]=t[n])}.bind(this))}},this),e},defaults:function(e){return this.each(o.call(arguments,1),function(t){if(!this.isUndefined(t)){var n=Object.keys(t);n.forEach(function(n){this.isUndefined(e[n])&&(e[n]=t[n])}.bind(this))}},this),e},compose:function(){var e=o.call(arguments);return function(){for(var t=o.call(arguments),n=e.length-1;n>=0;n--)t=[e[n].apply(this,t)];return t[0]}},each:function(e,t,o){if(e)if(n&&e.forEach&&e.forEach===n)e.forEach(t,o);else if(e.length===e.length+0){var i=void 0,a=void 0;for(i=0,a=e.length;i>8*t&255},hex_with_component:function(e,t,o){return 
o<<(n=8*t)|e&~(255<-1?t.length-t.indexOf(".")-1:0}t.__esModule=!0;var s=n(7),l=o(s),d=n(5),u=o(d),c=function(e){function t(n,o,a){i(this,t),e.call(this,n,o);var s=a||{};this.__min=s.min,this.__max=s.max,this.__step=s.step,u["default"].isUndefined(this.__step)?0===this.initialValue?this.__impliedStep=1:this.__impliedStep=Math.pow(10,Math.floor(Math.log(Math.abs(this.initialValue))/Math.LN10))/10:this.__impliedStep=this.__step,this.__precision=r(this.__impliedStep)}return a(t,e),t.prototype.setValue=function(t){var n=t;return void 0!==this.__min&&nthis.__max&&(n=this.__max),void 0!==this.__step&&n%this.__step!==0&&(n=Math.round(n/this.__step)*this.__step),e.prototype.setValue.call(this,n)},t.prototype.min=function(e){return this.__min=e,this},t.prototype.max=function(e){return this.__max=e,this},t.prototype.step=function(e){return this.__step=e,this.__impliedStep=e,this.__precision=r(e),this},t}(l["default"]);t["default"]=c,e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function r(e,t){var n=Math.pow(10,t);return Math.round(e*n)/n}t.__esModule=!0;var s=n(12),l=o(s),d=n(9),u=o(d),c=n(5),f=o(c),h=function(e){function t(n,o,a){function r(){var e=parseFloat(h.__input.value);f["default"].isNaN(e)||h.setValue(e)}function s(){r(),h.__onFinishChange&&h.__onFinishChange.call(h,h.getValue())}function l(e){document.activeElement.blur();var t=_-e.clientY;h.setValue(h.getValue()+t*h.__impliedStep),_=e.clientY}function d(){u["default"].unbind(window,"mousemove",l),u["default"].unbind(window,"mouseup",d)}function c(e){u["default"].bind(window,"mousemove",l),u["default"].bind(window,"mouseup",d),_=e.clientY}i(this,t),e.call(this,n,o,a),this.__truncationSuspended=!1;var h=this,_=void 0;this.__input=document.createElement("input"),this.__input.setAttribute("type","text"),u["default"].bind(this.__input,"change",r),u["default"].bind(this.__input,"blur",s),u["default"].bind(this.__input,"mousedown",c),u["default"].bind(this.__input,"keydown",function(e){13===e.keyCode&&(h.__truncationSuspended=!0,this.blur(),h.__truncationSuspended=!1)}),this.updateDisplay(),this.domElement.appendChild(this.__input)}return a(t,e),t.prototype.updateDisplay=function(){return u["default"].isActive(this.__input)?this:(this.__input.value=this.__truncationSuspended?this.getValue():r(this.getValue(),this.__precision),e.prototype.updateDisplay.call(this))},t}(l["default"]);t["default"]=h,e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function r(e,t,n,o,i){return o+(i-o)*((e-t)/(n-t))}t.__esModule=!0;var s=n(12),l=o(s),d=n(9),u=o(d),c=function(e){function t(n,o,a,s,l){function 
d(e){document.activeElement.blur(),u["default"].bind(window,"mousemove",c),u["default"].bind(window,"mouseup",f),c(e)}function c(e){e.preventDefault();var t=u["default"].getOffset(h.__background),n=u["default"].getWidth(h.__background);return h.setValue(r(e.clientX,t.left,t.left+n,h.__min,h.__max)),!1}function f(){u["default"].unbind(window,"mousemove",c),u["default"].unbind(window,"mouseup",f),h.__onFinishChange&&h.__onFinishChange.call(h,h.getValue())}i(this,t),e.call(this,n,o,{min:a,max:s,step:l});var h=this;this.__background=document.createElement("div"),this.__foreground=document.createElement("div"),u["default"].bind(this.__background,"mousedown",d),u["default"].addClass(this.__background,"slider"),u["default"].addClass(this.__foreground,"slider-fg"),this.updateDisplay(),this.__background.appendChild(this.__foreground),this.domElement.appendChild(this.__background)}return a(t,e),t.prototype.updateDisplay=function(){var t=(this.getValue()-this.__min)/(this.__max-this.__min);return this.__foreground.style.width=100*t+"%",e.prototype.updateDisplay.call(this)},t}(l["default"]);t["default"]=c,e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}t.__esModule=!0;var r=n(7),s=o(r),l=n(9),d=o(l),u=function(e){function t(n,o,a){i(this,t),e.call(this,n,o);var r=this;this.__button=document.createElement("div"),this.__button.innerHTML=void 0===a?"Fire":a,d["default"].bind(this.__button,"click",function(e){return e.preventDefault(),r.fire(),!1}),d["default"].addClass(this.__button,"button"),this.domElement.appendChild(this.__button)}return a(t,e),t.prototype.fire=function(){this.__onChange&&this.__onChange.call(this),this.getValue().call(this.object),this.__onFinishChange&&this.__onFinishChange.call(this,this.getValue())},t}(s["default"]);t["default"]=u,e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}function a(e,t){if("function"!=typeof t&&null!==t)throw new TypeError("Super expression must either be null or a function, not "+typeof t);e.prototype=Object.create(t&&t.prototype,{constructor:{value:e,enumerable:!1,writable:!0,configurable:!0}}),t&&(Object.setPrototypeOf?Object.setPrototypeOf(e,t):e.__proto__=t)}function r(e,t,n,o){e.style.background="",b["default"].each(v,function(i){e.style.cssText+="background: "+i+"linear-gradient("+t+", "+n+" 0%, "+o+" 100%); "})}function s(e){e.style.background="",e.style.cssText+="background: -moz-linear-gradient(top, #ff0000 0%, #ff00ff 17%, #0000ff 34%, #00ffff 50%, #00ff00 67%, #ffff00 84%, #ff0000 100%);",e.style.cssText+="background: -webkit-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);",e.style.cssText+="background: -o-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);",e.style.cssText+="background: -ms-linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 
100%);",e.style.cssText+="background: linear-gradient(top, #ff0000 0%,#ff00ff 17%,#0000ff 34%,#00ffff 50%,#00ff00 67%,#ffff00 84%,#ff0000 100%);"}t.__esModule=!0;var l=n(7),d=o(l),u=n(9),c=o(u),f=n(2),h=o(f),_=n(3),p=o(_),m=n(5),b=o(m),g=function(e){function t(n,o){function a(e){_(e),c["default"].bind(window,"mousemove",_),c["default"].bind(window,"mouseup",l)}function l(){c["default"].unbind(window,"mousemove",_),c["default"].unbind(window,"mouseup",l),f()}function d(){var e=p["default"](this.value);e!==!1?(g.__color.__state=e,g.setValue(g.__color.toOriginal())):this.value=g.__color.toString()}function u(){c["default"].unbind(window,"mousemove",m),c["default"].unbind(window,"mouseup",u),f()}function f(){g.__onFinishChange&&g.__onFinishChange.call(g,g.__color.toString())}function _(e){e.preventDefault();var t=c["default"].getWidth(g.__saturation_field),n=c["default"].getOffset(g.__saturation_field),o=(e.clientX-n.left+document.body.scrollLeft)/t,i=1-(e.clientY-n.top+document.body.scrollTop)/t;return i>1?i=1:i<0&&(i=0),o>1?o=1:o<0&&(o=0),g.__color.v=i,g.__color.s=o,g.setValue(g.__color.toOriginal()),!1}function m(e){e.preventDefault();var t=c["default"].getHeight(g.__hue_field),n=c["default"].getOffset(g.__hue_field),o=1-(e.clientY-n.top+document.body.scrollTop)/t;return o>1?o=1:o<0&&(o=0),g.__color.h=360*o,g.setValue(g.__color.toOriginal()),!1}i(this,t),e.call(this,n,o),this.__color=new h["default"](this.getValue()),this.__temp=new h["default"](0);var g=this;this.domElement=document.createElement("div"),c["default"].makeSelectable(this.domElement,!1),this.__selector=document.createElement("div"),this.__selector.className="selector",this.__saturation_field=document.createElement("div"),this.__saturation_field.className="saturation-field",this.__field_knob=document.createElement("div"),this.__field_knob.className="field-knob",this.__field_knob_border="2px solid ",this.__hue_knob=document.createElement("div"),this.__hue_knob.className="hue-knob",this.__hue_field=document.createElement("div"),this.__hue_field.className="hue-field",this.__input=document.createElement("input"),this.__input.type="text",this.__input_textShadow="0 1px 1px ",c["default"].bind(this.__input,"keydown",function(e){13===e.keyCode&&d.call(this)}),c["default"].bind(this.__input,"blur",d),c["default"].bind(this.__selector,"mousedown",function(){c["default"].addClass(this,"drag").bind(window,"mouseup",function(){c["default"].removeClass(g.__selector,"drag")})});var v=document.createElement("div");b["default"].extend(this.__selector.style,{width:"122px",height:"102px",padding:"3px",backgroundColor:"#222",boxShadow:"0px 1px 3px rgba(0,0,0,0.3)"}),b["default"].extend(this.__field_knob.style,{position:"absolute",width:"12px",height:"12px",border:this.__field_knob_border+(this.__color.v<.5?"#fff":"#000"),boxShadow:"0px 1px 3px rgba(0,0,0,0.5)",borderRadius:"12px",zIndex:1}),b["default"].extend(this.__hue_knob.style,{position:"absolute",width:"15px",height:"2px",borderRight:"4px solid #fff",zIndex:1}),b["default"].extend(this.__saturation_field.style,{width:"100px",height:"100px",border:"1px solid #555",marginRight:"3px",display:"inline-block",cursor:"pointer"}),b["default"].extend(v.style,{width:"100%",height:"100%",background:"none"}),r(v,"top","rgba(0,0,0,0)","#000"),b["default"].extend(this.__hue_field.style,{width:"15px",height:"100px",border:"1px solid 
#555",cursor:"ns-resize",position:"absolute",top:"3px",right:"3px"}),s(this.__hue_field),b["default"].extend(this.__input.style,{outline:"none",textAlign:"center",color:"#fff",border:0,fontWeight:"bold",textShadow:this.__input_textShadow+"rgba(0,0,0,0.7)"}),c["default"].bind(this.__saturation_field,"mousedown",a),c["default"].bind(this.__field_knob,"mousedown",a),c["default"].bind(this.__hue_field,"mousedown",function(e){m(e),c["default"].bind(window,"mousemove",m),c["default"].bind(window,"mouseup",u)}),this.__saturation_field.appendChild(v),this.__selector.appendChild(this.__field_knob),this.__selector.appendChild(this.__saturation_field),this.__selector.appendChild(this.__hue_field),this.__hue_field.appendChild(this.__hue_knob),this.domElement.appendChild(this.__input),this.domElement.appendChild(this.__selector),this.updateDisplay()}return a(t,e),t.prototype.updateDisplay=function(){var e=p["default"](this.getValue());if(e!==!1){var t=!1;b["default"].each(h["default"].COMPONENTS,function(n){if(!b["default"].isUndefined(e[n])&&!b["default"].isUndefined(this.__color.__state[n])&&e[n]!==this.__color.__state[n])return t=!0,{}},this),t&&b["default"].extend(this.__color.__state,e)}b["default"].extend(this.__temp.__state,this.__color.__state),this.__temp.a=1;var n=this.__color.v<.5||this.__color.s>.5?255:0,o=255-n;b["default"].extend(this.__field_knob.style,{marginLeft:100*this.__color.s-7+"px",marginTop:100*(1-this.__color.v)-7+"px",backgroundColor:this.__temp.toString(),border:this.__field_knob_border+"rgb("+n+","+n+","+n+")"}),this.__hue_knob.style.marginTop=100*(1-this.__color.h/360)+"px",this.__temp.s=1,this.__temp.v=1,r(this.__saturation_field,"left","#fff",this.__temp.toString()),b["default"].extend(this.__input.style,{backgroundColor:this.__input.value=this.__color.toString(),color:"rgb("+n+","+n+","+n+")",textShadow:this.__input_textShadow+"rgba("+o+","+o+","+o+",.7)"})},t}(d["default"]),v=["-moz-","-o-","-webkit-","-ms-",""];t["default"]=g,e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t,n){var o=document.createElement("li");return t&&o.appendChild(t),n?e.__ul.insertBefore(o,n):e.__ul.appendChild(o),e.onResize(),o}function a(e,t){var n=e.__preset_select[e.__preset_select.selectedIndex];t?n.innerHTML=n.value+"*":n.innerHTML=n.value}function r(e,t,n){if(n.__li=t,n.__gui=e,U["default"].extend(n,{options:function(t){if(arguments.length>1){var o=n.__li.nextElementSibling;return n.remove(),l(e,n.object,n.property,{before:o,factoryArgs:[U["default"].toArray(arguments)]})}if(U["default"].isArray(t)||U["default"].isObject(t)){var o=n.__li.nextElementSibling;return n.remove(),l(e,n.object,n.property,{before:o,factoryArgs:[t]})}},name:function(e){return n.__li.firstElementChild.firstElementChild.innerHTML=e,n},listen:function(){return n.__gui.listen(n),n},remove:function(){return n.__gui.remove(n),n}}),n instanceof R["default"])!function(){var e=new N["default"](n.object,n.property,{min:n.__min,max:n.__max,step:n.__step});U["default"].each(["updateDisplay","onChange","onFinishChange","step"],function(t){var o=n[t],i=e[t];n[t]=e[t]=function(){var t=Array.prototype.slice.call(arguments);return i.apply(e,t),o.apply(n,t)}}),I["default"].addClass(t,"has-slider"),n.domElement.insertBefore(e.domElement,n.domElement.firstElementChild)}();else if(n instanceof N["default"]){var o=function(t){return 
U["default"].isNumber(n.__min)&&U["default"].isNumber(n.__max)?(n.remove(),l(e,n.object,n.property,{before:n.__li.nextElementSibling,factoryArgs:[n.__min,n.__max,n.__step]})):t};n.min=U["default"].compose(o,n.min),n.max=U["default"].compose(o,n.max)}else n instanceof S["default"]?(I["default"].bind(t,"click",function(){I["default"].fakeEvent(n.__checkbox,"click")}),I["default"].bind(n.__checkbox,"click",function(e){e.stopPropagation()})):n instanceof T["default"]?(I["default"].bind(t,"click",function(){I["default"].fakeEvent(n.__button,"click")}),I["default"].bind(t,"mouseover",function(){I["default"].addClass(n.__button,"hover")}),I["default"].bind(t,"mouseout",function(){I["default"].removeClass(n.__button,"hover")})):n instanceof j["default"]&&(I["default"].addClass(t,"color"),n.updateDisplay=U["default"].compose(function(e){return t.style.borderLeftColor=n.__color.toString(),
-e},n.updateDisplay),n.updateDisplay());n.setValue=U["default"].compose(function(t){return e.getRoot().__preset_select&&n.isModified()&&a(e.getRoot(),!0),t},n.setValue)}function s(e,t){var n=e.getRoot(),o=n.__rememberedObjects.indexOf(t.object);if(o!==-1){var i=n.__rememberedObjectIndecesToControllers[o];if(void 0===i&&(i={},n.__rememberedObjectIndecesToControllers[o]=i),i[t.property]=t,n.load&&n.load.remembered){var a=n.load.remembered,r=void 0;if(a[e.preset])r=a[e.preset];else{if(!a[Q])return;r=a[Q]}if(r[o]&&void 0!==r[o][t.property]){var s=r[o][t.property];t.initialValue=s,t.setValue(s)}}}}function l(e,t,n,o){if(void 0===t[n])throw new Error('Object "'+t+'" has no property "'+n+'"');var a=void 0;if(o.color)a=new j["default"](t,n);else{var l=[t,n].concat(o.factoryArgs);a=E["default"].apply(e,l)}o.before instanceof A["default"]&&(o.before=o.before.__li),s(e,a),I["default"].addClass(a.domElement,"c");var d=document.createElement("span");I["default"].addClass(d,"property-name"),d.innerHTML=a.property;var u=document.createElement("div");u.appendChild(d),u.appendChild(a.domElement);var c=i(e,u,o.before);return I["default"].addClass(c,ne.CLASS_CONTROLLER_ROW),a instanceof j["default"]?I["default"].addClass(c,"color"):I["default"].addClass(c,typeof a.getValue()),r(e,c,a),e.__controllers.push(a),a}function d(e,t){return document.location.href+"."+t}function u(e,t,n){var o=document.createElement("option");o.innerHTML=t,o.value=t,e.__preset_select.appendChild(o),n&&(e.__preset_select.selectedIndex=e.__preset_select.length-1)}function c(e,t){t.style.display=e.useLocalStorage?"block":"none"}function f(e){var t=e.__save_row=document.createElement("li");I["default"].addClass(e.domElement,"has-save"),e.__ul.insertBefore(t,e.__ul.firstChild),I["default"].addClass(t,"save-row");var n=document.createElement("span");n.innerHTML=" ",I["default"].addClass(n,"button gears");var o=document.createElement("span");o.innerHTML="Save",I["default"].addClass(o,"button"),I["default"].addClass(o,"save");var i=document.createElement("span");i.innerHTML="New",I["default"].addClass(i,"button"),I["default"].addClass(i,"save-as");var a=document.createElement("span");a.innerHTML="Revert",I["default"].addClass(a,"button"),I["default"].addClass(a,"revert");var r=e.__preset_select=document.createElement("select");e.load&&e.load.remembered?U["default"].each(e.load.remembered,function(t,n){u(e,n,n===e.preset)}):u(e,Q,!1),I["default"].bind(r,"change",function(){for(var t=0;t0&&(e.preset=this.preset,e.remembered||(e.remembered={}),e.remembered[this.preset]=p(this)),e.folders={},U["default"].each(this.__folders,function(t,n){e.folders[n]=t.getSaveObject()}),e},save:function(){this.load.remembered||(this.load.remembered={}),this.load.remembered[this.preset]=p(this),a(this,!1),this.saveToLocalStorageIfPossible()},saveAs:function(e){this.load.remembered||(this.load.remembered={},this.load.remembered[Q]=p(this,!0)),this.load.remembered[e]=p(this),this.preset=e,u(this,e,!0),this.saveToLocalStorageIfPossible()},revert:function(e){U["default"].each(this.__controllers,function(t){this.getRoot().load.remembered?s(e||this.getRoot(),t):t.setValue(t.initialValue),t.__onFinishChange&&t.__onFinishChange.call(t,t.getValue())},this),U["default"].each(this.__folders,function(e){e.revert(e)}),e||a(this.getRoot(),!1)},listen:function(e){var 
t=0===this.__listening.length;this.__listening.push(e),t&&b(this.__listening)},updateDisplay:function(){U["default"].each(this.__controllers,function(e){e.updateDisplay()}),U["default"].each(this.__folders,function(e){e.updateDisplay()})}}),e.exports=ne},function(e,t){"use strict";e.exports={load:function(e,t){var n=t||document,o=n.createElement("link");o.type="text/css",o.rel="stylesheet",o.href=e,n.getElementsByTagName("head")[0].appendChild(o)},inject:function(e,t){var n=t||document,o=document.createElement("style");o.type="text/css",o.innerHTML=e;var i=n.getElementsByTagName("head")[0];try{i.appendChild(o)}catch(a){}}}},function(e,t){e.exports='Here\'s the new load parameter for your
GUI
\'s constructor:
Automatically save values to
localStorage
on exit.
The values saved to localStorage
will override those passed to dat.GUI
\'s constructor. This makes it easier to work incrementally, but localStorage
is fragile, and your friends may not see the same values you do.
'},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}t.__esModule=!0;var i=n(10),a=o(i),r=n(13),s=o(r),l=n(14),d=o(l),u=n(11),c=o(u),f=n(15),h=o(f),_=n(8),p=o(_),m=n(5),b=o(m),g=function(e,t){var n=e[t];return b["default"].isArray(arguments[2])||b["default"].isObject(arguments[2])?new a["default"](e,t,arguments[2]):b["default"].isNumber(n)?b["default"].isNumber(arguments[2])&&b["default"].isNumber(arguments[3])?b["default"].isNumber(arguments[4])?new d["default"](e,t,arguments[2],arguments[3],arguments[4]):new d["default"](e,t,arguments[2],arguments[3]):b["default"].isNumber(arguments[4])?new s["default"](e,t,{min:arguments[2],max:arguments[3],step:arguments[4]}):new s["default"](e,t,{min:arguments[2],max:arguments[3]}):b["default"].isString(n)?new c["default"](e,t):b["default"].isFunction(n)?new h["default"](e,t,""):b["default"].isBoolean(n)?new p["default"](e,t):null};t["default"]=g,e.exports=t["default"]},function(e,t){"use strict";function n(e){setTimeout(e,1e3/60)}t.__esModule=!0,t["default"]=window.requestAnimationFrame||window.webkitRequestAnimationFrame||window.mozRequestAnimationFrame||window.oRequestAnimationFrame||window.msRequestAnimationFrame||n,e.exports=t["default"]},function(e,t,n){"use strict";function o(e){return e&&e.__esModule?e:{"default":e}}function i(e,t){if(!(e instanceof t))throw new TypeError("Cannot call a class as a function")}t.__esModule=!0;var a=n(9),r=o(a),s=n(5),l=o(s),d=function(){function e(){i(this,e),this.backgroundElement=document.createElement("div"),l["default"].extend(this.backgroundElement.style,{backgroundColor:"rgba(0,0,0,0.8)",top:0,left:0,display:"none",zIndex:"1000",opacity:0,WebkitTransition:"opacity 0.2s linear",transition:"opacity 0.2s linear"}),r["default"].makeFullscreen(this.backgroundElement),this.backgroundElement.style.position="fixed",this.domElement=document.createElement("div"),l["default"].extend(this.domElement.style,{position:"fixed",display:"none",zIndex:"1001",opacity:0,WebkitTransition:"-webkit-transform 0.2s ease-out, opacity 0.2s linear",transition:"transform 0.2s ease-out, opacity 0.2s linear"}),document.body.appendChild(this.backgroundElement),document.body.appendChild(this.domElement);var t=this;r["default"].bind(this.backgroundElement,"click",function(){t.hide()})}return e.prototype.show=function(){var e=this;this.backgroundElement.style.display="block",this.domElement.style.display="block",this.domElement.style.opacity=0,this.domElement.style.webkitTransform="scale(1.1)",this.layout(),l["default"].defer(function(){e.backgroundElement.style.opacity=1,e.domElement.style.opacity=1,e.domElement.style.webkitTransform="scale(1)"})},e.prototype.hide=function t(){var e=this,t=function 
n(){e.domElement.style.display="none",e.backgroundElement.style.display="none",r["default"].unbind(e.domElement,"webkitTransitionEnd",n),r["default"].unbind(e.domElement,"transitionend",n),r["default"].unbind(e.domElement,"oTransitionEnd",n)};r["default"].bind(this.domElement,"webkitTransitionEnd",t),r["default"].bind(this.domElement,"transitionend",t),r["default"].bind(this.domElement,"oTransitionEnd",t),this.backgroundElement.style.opacity=0,this.domElement.style.opacity=0,this.domElement.style.webkitTransform="scale(1.1)"},e.prototype.layout=function(){this.domElement.style.left=window.innerWidth/2-r["default"].getWidth(this.domElement)/2+"px",this.domElement.style.top=window.innerHeight/2-r["default"].getHeight(this.domElement)/2+"px"},e}();t["default"]=d,e.exports=t["default"]},function(e,t,n){t=e.exports=n(24)(),t.push([e.id,".dg ul{list-style:none;margin:0;padding:0;width:100%;clear:both}.dg.ac{position:fixed;top:0;left:0;right:0;height:0;z-index:0}.dg:not(.ac) .main{overflow:hidden}.dg.main{-webkit-transition:opacity .1s linear;transition:opacity .1s linear}.dg.main.taller-than-window{overflow-y:auto}.dg.main.taller-than-window .close-button{opacity:1;margin-top:-1px;border-top:1px solid #2c2c2c}.dg.main ul.closed .close-button{opacity:1!important}.dg.main .close-button.drag,.dg.main:hover .close-button{opacity:1}.dg.main .close-button{-webkit-transition:opacity .1s linear;transition:opacity .1s linear;border:0;position:absolute;line-height:19px;height:20px;cursor:pointer;text-align:center;background-color:#000}.dg.main .close-button:hover{background-color:#111}.dg.a{float:right;margin-right:15px;overflow-x:hidden}.dg.a.has-save>ul{margin-top:27px}.dg.a.has-save>ul.closed{margin-top:0}.dg.a .save-row{position:fixed;top:0;z-index:1002}.dg li{-webkit-transition:height .1s ease-out;transition:height .1s ease-out}.dg li:not(.folder){cursor:auto;height:27px;line-height:27px;overflow:hidden;padding:0 4px 0 5px}.dg li.folder{padding:0;border-left:4px solid transparent}.dg li.title{cursor:pointer;margin-left:-4px}.dg .closed li:not(.title),.dg .closed ul li,.dg .closed ul li>*{height:0;overflow:hidden;border:0}.dg .cr{clear:both;padding-left:3px;height:27px}.dg .property-name{cursor:default;float:left;clear:left;width:40%;overflow:hidden;text-overflow:ellipsis}.dg .c{float:left;width:60%}.dg .c input[type=text]{border:0;margin-top:4px;padding:3px;width:100%;float:right}.dg .has-slider input[type=text]{width:30%;margin-left:0}.dg .slider{float:left;width:66%;margin-left:-5px;margin-right:0;height:19px;margin-top:4px}.dg .slider-fg{height:100%}.dg .c input[type=checkbox]{margin-top:9px}.dg .c select{margin-top:5px}.dg .cr.boolean,.dg .cr.boolean *,.dg .cr.function,.dg .cr.function *,.dg .cr.function .property-name{cursor:pointer}.dg .selector{display:none;position:absolute;margin-left:-9px;margin-top:23px;z-index:10}.dg .c:hover .selector,.dg .selector.drag{display:block}.dg li.save-row{padding:0}.dg li.save-row .button{display:inline-block;padding:0 6px}.dg.dialogue{background-color:#222;width:460px;padding:15px;font-size:13px;line-height:15px}#dg-new-constructor{padding:10px;color:#222;font-family:Monaco,monospace;font-size:10px;border:0;resize:none;box-shadow:inset 1px 1px 1px #888;word-wrap:break-word;margin:9pt 0;display:block;width:440px;overflow-y:scroll;height:75pt;position:relative}#dg-local-explain{display:none;font-size:11px;line-height:17px;border-radius:3px;background-color:#333;padding:8px;margin-top:10px}#dg-local-explain 
code{font-size:10px}#dat-gui-save-locally{display:none}.dg{color:#eee;font:11px 'Lucida Grande',sans-serif;text-shadow:0 -1px 0 #111}.dg.main::-webkit-scrollbar{width:5px;background:#1a1a1a}.dg.main::-webkit-scrollbar-corner{height:0;display:none}.dg.main::-webkit-scrollbar-thumb{border-radius:5px;background:#676767}.dg li:not(.folder){background:#1a1a1a;border-bottom:1px solid #2c2c2c}.dg li.save-row{line-height:25px;background:#dad5cb;border:0}.dg li.save-row select{margin-left:5px;width:81pt}.dg li.save-row .button{margin-left:5px;margin-top:1px;border-radius:2px;font-size:9px;line-height:7px;padding:4px 4px 5px;background:#c5bdad;color:#fff;text-shadow:0 1px 0 #b0a58f;box-shadow:0 -1px 0 #b0a58f;cursor:pointer}.dg li.save-row .button.gears{background:#c5bdad url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAsAAAANCAYAAAB/9ZQ7AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAQJJREFUeNpiYKAU/P//PwGIC/ApCABiBSAW+I8AClAcgKxQ4T9hoMAEUrxx2QSGN6+egDX+/vWT4e7N82AMYoPAx/evwWoYoSYbACX2s7KxCxzcsezDh3evFoDEBYTEEqycggWAzA9AuUSQQgeYPa9fPv6/YWm/Acx5IPb7ty/fw+QZblw67vDs8R0YHyQhgObx+yAJkBqmG5dPPDh1aPOGR/eugW0G4vlIoTIfyFcA+QekhhHJhPdQxbiAIguMBTQZrPD7108M6roWYDFQiIAAv6Aow/1bFwXgis+f2LUAynwoIaNcz8XNx3Dl7MEJUDGQpx9gtQ8YCueB+D26OECAAQDadt7e46D42QAAAABJRU5ErkJggg==) 2px 1px no-repeat;height:7px;width:8px}.dg li.save-row .button:hover{background-color:#bab19e;box-shadow:0 -1px 0 #b0a58f}.dg li.folder{border-bottom:0}.dg li.title{padding-left:1pc;background:#000 url(data:image/gif;base64,R0lGODlhBQAFAJEAAP////Pz8////////yH5BAEAAAIALAAAAAAFAAUAAAIIlI+hKgFxoCgAOw==) 6px 10px no-repeat;cursor:pointer;border-bottom:1px solid hsla(0,0%,100%,.2)}.dg .closed li.title{background-image:url(data:image/gif;base64,R0lGODlhBQAFAJEAAP////Pz8////////yH5BAEAAAIALAAAAAAFAAUAAAIIlGIWqMCbWAEAOw==)}.dg .cr.boolean{border-left:3px solid #806787}.dg .cr.color{border-left:3px solid}.dg .cr.function{border-left:3px solid #e61d5f}.dg .cr.number{border-left:3px solid #2fa1d6}.dg .cr.number input[type=text]{color:#2fa1d6}.dg .cr.string{border-left:3px solid #1ed36f}.dg .cr.string input[type=text]{color:#1ed36f}.dg .cr.boolean:hover,.dg .cr.function:hover{background:#111}.dg .c input[type=text]{background:#303030;outline:0}.dg .c input[type=text]:hover{background:#3c3c3c}.dg .c input[type=text]:focus{background:#494949;color:#fff}.dg .c .slider{background:#303030;cursor:ew-resize}.dg .c .slider-fg{background:#2fa1d6;max-width:100%}.dg .c .slider:hover{background:#3c3c3c}.dg .c .slider:hover .slider-fg{background:#44abda}",""])},function(e,t){e.exports=function(){var e=[];return e.toString=function(){for(var e=[],t=0;t int:
- """Extract the character position from the JSONDecodeError message.
-
- Args:
- error_message (str): The error message from the JSONDecodeError
- exception.
-
- Returns:
- int: The character position.
- """
-
- char_pattern = re.compile(r"\(char (\d+)\)")
- if match := char_pattern.search(error_message):
- return int(match[1])
- else:
- raise ValueError("Character position not found in the error message.")
-
-
-def validate_json(json_object: object, schema_name: object) -> object:
- """
- :type schema_name: object
- :param schema_name:
- :type json_object: object
- """
- with open(f"autogpt/json_utils/{schema_name}.json", "r") as f:
- schema = json.load(f)
- validator = Draft7Validator(schema)
-
- if errors := sorted(validator.iter_errors(json_object), key=lambda e: e.path):
- logger.error("The JSON object is invalid.")
- if CFG.debug_mode:
- logger.error(
- json.dumps(json_object, indent=4)
-            )  # Dump the offending JSON object for easier debugging
- logger.error("The following issues were found:")
-
- for error in errors:
- logger.error(f"Error: {error.message}")
- elif CFG.debug_mode:
- print("The JSON object is valid.")
-
- return json_object
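For context, a minimal self-contained sketch of the "(char N)" parsing that `extract_char_position` above performs; the malformed payload is invented purely for illustration.

```python
import json
import re

# Same pattern as extract_char_position above.
char_pattern = re.compile(r"\(char (\d+)\)")

try:
    json.loads('{"command": "browse", "args": }')  # deliberately malformed
except json.JSONDecodeError as e:
    # str(e) looks like "Expecting value: line 1 column 31 (char 30)"
    match = char_pattern.search(str(e))
    assert match is not None
    print(int(match[1]))  # -> 30, index of the first offending character
```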
diff --git a/spaces/MaximeTut/Emploi2021/app.py b/spaces/MaximeTut/Emploi2021/app.py
deleted file mode 100644
index 7f3164e9aac3e025cf6c8c8c301eec02849c82e1..0000000000000000000000000000000000000000
--- a/spaces/MaximeTut/Emploi2021/app.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import pandas as pd
-import json
-import matplotlib.pyplot as plt
-import streamlit as st
-import streamlit.components.v1 as stc
-import plotly.express as px
-import seaborn as sns
-from streamlit_option_menu import option_menu
-
-sns.set()
-logo = "https://www.ville-creteil.fr/img/Une-logo-pole-emploi.jpg"
-logo2 = "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAoHCBISEhISEhESERESEhESEBEREhESEhAOFxMYGBcTFxcbICwkGx0pIBcXJTYlKS49MzMzGiI5PjkxPSwyMzABCwsLEA4QHhISHTIpIiAyMjAyMDIyMjIyMjAyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMv/AABEIAOEA4QMBIgACEQEDEQH/xAAcAAEAAQUBAQAAAAAAAAAAAAAABwECBAUGAwj/xABIEAACAQMBBAUGCgkBCAMAAAABAgADBBESBQYhMQcTQVFhIlJxgZGhFCMyQlRikrHB0RYXQ3JzgpOy0qJEU2ODlMLh8BUzNP/EABoBAQADAQEBAAAAAAAAAAAAAAADBAUBAgb/xAAzEQACAgECAwUFBwUAAAAAAAAAAQIRAwQSITFBExRRYXEFMoGRwQYjM7HR4fAiUmKhsv/aAAwDAQACEQMRAD8AmaIiAIiIAiIgCIiAIiIAiUiAVieLV0Xm6j0sBPI39H/ep9oTtM45JdTLiYq31E8qqfaE9lqqeTA+ggzjVBST5M9IlIzB0rERAEREAREQBERAEREAREQBERAEREAREQBKRNFtbeKnRyqfGVBzAPkqfrH8BPUYOTpI8ZMkcauTo3bsAMkgAcyTgTT3m8NFMhSajDzfk/aP4TkbzadWufLYkdiLwUeqVo2jH5R0+A4mXIaVLjNmbPXSnwxL4s2txvJWb5OmmPAZPtMwHuqtTm7t6zj2Ce1Ogi9mT3njMgPiSpQj7qIWsk/fkzXi1c/M9pAl4s37h7RM4PK653ezz3eJgGyqeaD/ADCWmhUXjoYeK/8AibPrIFSN7HYRMCntKvT5O48Gyfvmytt53XhUQOO9fJP5SxiDzAPp4zGq2VNuXknw/KeXHHLmj1HtcfuyOmstsUKuAr6W81/JPq7D6pspG1xZVE4jyx3rzHqmVs/eCrRIBOtBzVzxA8D2SKek6wZZx69p1lVef7Hfys12zNq0rgZRsMPlI3Bl9Xb6ZsJTaadM0IyUladorEROHoREQBERAEREAREQBERAE8qtRUUsxCqBkknAAirUVFLMQqqCSTwAA7ZG+8m8bXLlEJWgp4DtqHzm8O4SXDheV0uXiV9RqI4Y2+fRGw27vQ1QmnQJWnyL8mf0dwmltrdn4ngvf2meVpb/ADm9S/iZsA80Uo41tgZNTzPfk+R70lVBhRjx7TPUVJia5XXPNE/BcEZeuV62YgeVDzlAyw8qHmJrldcUDL6yV6yYgeVFScoGWKkqHmJ1kqKkAy9cxrm0Sp9VvOH498ojk8sn0cZ7rSc8kc/ytF0ccdypo0VValBwwJUg+S6mdfu/vOtUinWIWpyVuSue7waaypbOVIam5B55UzndoWTUjqw2jPAkEFT4/nPUowzKnzIovJpnuhy6olzMrOL3U3l1lbeu3l8qdQ/P+o3j3HtnZzOyY5Y5VI2MOWOWO6JWIieCUREQBERAEREAShlZzG+23fglDSh+OrZVO9E+c/q5DxM9Qg5yUV1PGSahFylyRz++28PWObak3xan4xh8+oPm/uj75oLGj89v5R+M19mmtsnkOJ8TNsHmsorHHYjFW7LPtJ/AyhUldcxRUgVJ5omMvXKh5idZK9ZAMrXK9ZMUPK9ZFAyusldUxg8r1k4DJ1y5WJOBxJ5AcyZZZW9Ss4p0xqY+xR3nuE7rZGxadAA411O1z2eCjsEiyZVAlx4nP0NHY7v1amGc9WvceLEejsm9tth0E5qXPe5z7uU2mIlOWWUi7HDCPQ80oqvBVVR4ACemIjMjJRiWPSVgQyhgeYIBEvzKwDSXu7NrV4mkEbmGpEowPfw4e6XVtoLaCmlwzFG8hbggY1di1Mcjjt5HE3Ew9pWSXFJ6TjKsMeIPYw8QYnKbjSfpYxQxxnclwfOuZkUqquAykMpGQwOQR4GesiS22tc7MuHpMS6BsPTb5LL2OvmnHdJK2RtWldUxUpHI5Mp+Ujeaw7DIcWZT4cmuhd1ehnp0pc4PlJfXwNjERJikIiIAiIgFjuACScAAknuAkI7ybXN3dO4JKZ6ukO6mDj38/XJF6Q9qG3snVTh65FJccwp4ufsgj1yJtnrltXYvL0zQ0UKTyP0Rma+Tk1jXqzb0BoUD2+JnqHmLrlQ8sEKMrXKh5jBpXVAMjXKh5i65UPAMrXKipMXXK9ZB0yg89KCs7KiAszEKoHaTMLXO13E2bnVcuOWUpZ/1P+HtkeWeyNnvHHfKjoth7KW2pheBdsGo3aT3egTaRKzMbbds1EklSE8q1VUUszBVAyWJwAJWo4UEkgAAkk8gB2yNN4dutcuQpIoofIXzvrt4/dPeLG8jojy5VjR0O0N8FBK0E1/XfIX1DnNO+89037QL4KiAe8Gc9rlesl+OCC6FGWecup0NPea6X9oG8GRD9wE3FhveCQKyafrpkj1rznD9ZK64lghLoI5px6kvUKyuodGDK3EFTkGehkZ7B221s44k0mPxif8AeO4j3ySadQOoZTlWAKkciD2yjlxODL2LKsiOI6R9k66a3SDyqfkVMfOpk+S3qPuM4fd7bj2VYVFJKHAqJ2VE/Mdkmq8tlq03puMq6srA9xGJAm07ZqNWpSb5VNmQ+ozM1ENs1NH13sTJHPhlp8itL/l/o+RPlldpWppVpkMjqGRh2gzJkadF22eL2jnhxqUs9h+co9XH2yS5bhLdGzA1ulelzSxvpy810ERE9lUShlZQwCIulXaGu6SiD5NGnkj67nJ9wE5uz8lB48Z5bz3nXX1zU55rMo/dU6B90vVsDHdNnHHbjSMbI92RyMrVGuY2uNc7RwytUuDzE1yuqcoGTrldUxtUrrnaBk6pUPMbVKhooGZTBdlVeLMQoHiTgSZdm2oo0qdIckUL6TjifbIq3Nodbe0QeIQtUb0IpI9+mTBKGrlxUS9pI8HISspEqFw5TfzaXV0VpKcNWJz/AA15+04EjzVN1v5ea71lzwpIlMekjU393unOa5pYIbYLzMzPLdN+Rk6o1TH1yuuTUQmRrl2uYuqVDQDJ1zv9xdodZSeixy1Igr/Cbl7CDI41zo9xbrTeKueFRHQ+kDUP7ZFnhug/ImwSqa8yTZEXSbZ9XdhwMLXpo/8AzAdLe7TJcM4XpTtdVvSq4406pBP1XXP3qJkahXD0Pp/Y2bs9ZH/K18+X+6I52NfG3r0awPyHRj4jV5Q9mZ9A0nDAMOIYAg+BGRPnAcx6RJ43SuOssLVicnqUUnvKeSf7ZFp3zRq/aPDwx5V5r6r6m7iIlk+WE8bh9KO3mqzewEz2mDthsW1we6hVP+gwD5yR9T6j85ix9JOZnB5rbc8V9A+6ZmqbrMNs99cuDzG1RqnDlmVrgVJja5a1dR2+yDtmZqldc1rXZ7B7Zabp/D2RQ4m11yuqan4U3f7oF2/h7Io9EldGCarms/mUQo/ncf4yTwZF/RBVLteZA4LQ5eJf8pJ8y9T+IzS034aLhEtjMrlgjT
eTda8qXVWpTpiqlRy4KugIyB5JDEcRNX+iW0Pozf1KX+UmCJYjqppVwK0tLBu+JD/6JbQ+jH+pT/ylf0T2h9GP9Sl/lJfjM9d7n5HO6Q8yIf0T2h9Gb+pS/wAo/RTaH0Zvt0v8pL0pHep+CHdIeLIj/RO/+jN9ul/lNtu1uzd07qnUqUxSp021El0JbgRpAUnvkiywmJaiTVcOJ1aaEXfEGc3v9S12Ff6pR/YwnR5mk3wGbG6/hMZVn7rNDSScc8Gv7l+ZB0nHcMEbOtc+bUPqNVyJCtnavWqLTpqWd2Cqo7yefok/7JshQoUqI49VTRM95A4n25lXTq3Z9L9o8q7OGPrd/BKjOiIlo+TExNppqoVl86lUHtQiZcsdcgjvBHtgHy/SPL0CZOqWXtHq6tSmedOo6fZcj8JbmbidmJJcT11S1nxPNnxPImDiiej1CZZmWxB6ouzGZbEHaLsxmWxAokvobqfGXa99Oi3qDMPxkrAyFuiW60X7oTwq27geLIyuPcGkzAzM1K+8Zo6d/wBB6Ss8wZdmQE5xe2+ka2ta70Oqq1mpnTUZNAUP2qMnJxNf+ti2+i3H2qX5zgN+KWjaV2vfWLD0OoYffNDNCGmxuKZQlqJptEu/rYtvotx9ql+cfrYtvotx9ql+ciKJ67rj/jOd5yEu/rXtvotx9ql+cfrXtvotx9ql+ciKI7rj8/mO85CXP1rW30W4+1S/OU/WtbfRbj7VL85EkR3XH/Gc7zkJZbpWt+y0rn0vSE87XfT/AOTqrYfBuqpXGpHqdbqdU0knSNOM8JFU7Port9e0Ubsp06jn04AH3zxk0+OMG66EmLUZN8afUlnY+71taD4mnhiMF28qof5vym5ECVmalXI0Z5JZJOU3bfViIidPAlDKxAPn7pCsup2lcDGFqMKq+hxx94M5wNJO6Ztm/wD57pRw8qjUI7/lIT7GEizVNXBPdBGbmhU2XExmWZjMmsiovzGZZmMxYovzGZZmMxYovzGZZmMxYo3W6d/8HvrWqThVqqH/AHHyje5p9DAz5fzJ/wByNsi7sqTk5emOqqjtFRBjJ9IwfXKWqjykW9NKrR0WZXMsBjMplsiHpc2ead3TuAPIr0wrH/i0uHvUr7JwOZP++OwxfWj0hjrF+MonuqqOA9YyPXIAq02RmRgVZSVZTwKsDggzR007jXgUM8KlfiMxmWZjMsWQUX5jMszGYsUX5jMszKjJ4DiTwAHEk90WdouzJB3CuE2a1SterUpNXp0xQXq9TNS1Es5A5DIA4zP3F3F0aLq8TL8Go0G5Iex6g7T3L2TQb8bQ6++qsDkU8U0/dTn7y0zdbqqhtibnsT2atTn+8ukr4fImDY23ra8BNCoGK/KQjS6jvKnjNtIR6OXcbQpBScFXD+KY7fXiTaJSxzclbLPtLRx0ubZF2mrKxESQoCIiAaXevY4vLOvbn5TpqpnzaqnUh9o95nzg6lSVYFWUlWU81YHBHtn1QZCHSvu/8HuRcouKV0TqxyW4AyR6wM+oy1pZ09r6lfUQtbvA4TMZlsS9ZTouzGZbECi7MZlsQKLsxmWxAouzOr6Pt4/gV1pqNi3r4Sr3I+fIqerJB8D4TkonJJSVM7G4uz6eVs8RxB4gjkRLsyK+jvfUKFs7t8KMLbVmPAd1Nz9x9UlEGZ04ODpl6MlJWi/M4LfzccXRa5tQFucfGU+AWvgYDeD4GM9s7vMTkZOLtHZRUlTPmevSem7JUVqbocOjgqynuIM88z6J2xsG1vBi4oo5Awr401FHg44zjrzort2JNK5qIOxXVagHr4GW46iL58CrLA+hE+YzJNXooGeN5w8KXH75tLDoys0INR6tfHYSEU+nTxnp6iBxYJEU7M2dWuqgpUKbVHPMKOCjzmPJR4mS7uhuLSsytatprXI4rwylE/UB5t9Y+qdRYWFG3Tq6NNKVPzaahQT3nvPpmRmV8mdy4LgieGJR4s1u8e0hbWtWrnygulB31W4KP/e6Qc7EliTk5JJ7yeZnY9Iu2uurC3Q5p0M6scmqnmfUOHrM126O7VS+q8itBCDVfs78KfOPumVlbnOl0PtvZeKOj0rzZeG7i/TovV/U6voq2OR1l4455pUc9vnsPDPD1GSXMe0tkpU0p01CIihUUclUchMiWYR2qj5rV6l6nNLI+v5dBERPRWEREATVbw7Hp3ttUt6g8lx5LdqVBxVx4gzayhgHy7tfZ1S1r1Lequl6bEHhwYdjjwI4zDzJ66Q9zxtCkKlIAXdJT1Z5CqnM02/A9hkDVabIzI6lHUlXVhhlYcwRNHFl3rzKWSG1lMxmW5jMkIy7MZluYzALsxmW5jMAuzGZbmMwC7M7fdPpBq2oWjchq9AYCNn42kvcCflDwM4bMZnJRUlTOxk1xR9IbJ21bXia7eqlQfOUHFRD3Mh4rNhqnzHb3D02D03am68nRirD1idbsvpHv6ICuyXCj/eLh/tL+UrSwPoWFlXUm/MpmRra9K1I4620qKe003Vh7GxNlT6TtnEeV8IQ9xpavuMj7OS6HtTi+p2+ZTM449JOzfPrf9O88K3SfYL8hLiof4ap/cZzs5eA3I7jM5/e7bvwSjhONerlaS8yO+oR3D3maDZu/la+rrb2Vn5TcWqVnytKmObsF7B7zwnfUdi0RU650FSvgDrHGSMdiA8FHonjJGUVRPpp41NSmrS6eP7EZ7s7jVrlhWuQ9KiTqw3CrUzxOAeQPeZKuz7Gnb01pUlCIowFHvJ7z4zLlZDDGoci1rNdl1Urm+C5JckIiJ7KYiIgCIiAIiIBScB0g7iLeg3FsFS7UeUvJLhR2HufuPtkgSmJ2MnF2jjSapnyhcUHpu1OojJUQlXRxhlYdhE88z6H3x3Kt9pKWPxVyowldRknuVx85feJBu8O7l1YVClxTKjPkVVy1Nx3q34HjL2PMpepVnjaNVmMyzMrJLPFF2YzLMxmLFF+YzLMxmLFF+YzLMxmLFF+ZTMtzK5ixRdmMyzMZixRdmbTYGxLi+rLRt01E/Lc8EpJ2u7dg8OZm73R3Aur8rUcG3tcgtVdfLde6mh5nxPAePKTlsPYlvZUhRt0CIPlHmzt5zN2mQ5MyjwXMlhivmYm6m7VHZ1EU6Y1O2DVqkeVUf8AAdwm+lYlNtt2yzyERE4BERAEREAREQBERAEREATGvbOnWptTq00qIwwyOoZSPQZkxAIl3l6I1YtUsKujOT8GrElM9yVOa+hs+mRltjYV3aMVuaD08fPK5pn0OOE+psTyrUVdSrorqeBVgGBHoMmjmkufEjeNM+TMxmfQu1ujTZlxkii1u5z5Vs3V8e/Scr7pyV90Mt/s96COxa9Lj9pD+EmWeLI3iZE+YzO8ueibaa/INCp6KhX7xMJujLao/wBnQ+iok9dpHxOdmzkMxmdhT6MNqt+wpr+9VUTY2vRDtB//ALKtvS7/ACmfHsEdpHxHZsj7MZkxbO6GqQwbi8qv3rQRKY9GptRnZ7G3K2daYNK1QuP2lXNWpnv1NnHqnh54rkeliZBuwdytoXmDToFKZ/a1s00A7+IyfUJKu
6vRjaWpWrcH4XXGCNa6aKH6tPtPi3sE78DHLgJdIZZZSJFjSLFUDgBgDgAOQEviJEexERAEREAREQBERAEREAREQBERAEREAREQBERAKRiViAUxErEAREQBERAEREAREQBERAEREAREQBERAEREAREQBERAEREAREQBERAEREAREQBERAEREAREQBERAEREAREQD//Z"
-
-st.set_page_config(page_icon = logo2,
- page_title ="Bonsoir !", layout = "wide")
-
-df = pd.read_csv("df_clean2.csv")
-departement_geo = json.load(open("departements.geojson", "r"))
-
-liste_dep = sorted(df.NomDept.unique().tolist())
-liste_famille = df.famille.unique().tolist()
-liste_metier = list(df.metier.unique())
-
-
-dico_map = {}
-for feature in departement_geo["features"]:
- feature['id']=feature['properties']['code']
- dico_map[feature['properties']['nom']] = feature['id']
-
-
-def heatmap(dep):
- departement = df[df.NomDept == dep]
-
- dep_tail = departement.groupby(["metier"]).agg({"Nbr_demande":"sum"}).sort_values(by="Nbr_demande", ascending = True).head(10)
- labels_tail = dep_tail.index.values.tolist()
-
- dep_head = departement.groupby(["metier"]).agg({"Nbr_demande":"sum"}).sort_values(by="Nbr_demande", ascending = True).tail(10)
- labels_head = dep_head.index.values.tolist()
-
-
- sns.set()
- dep_head.reset_index(inplace=True)
- dep_head = dep_head.sort_values("Nbr_demande", ascending = False)
- dep_head.columns = ["metier", "nbr_demande"]
-
- dep_tail.reset_index(inplace=True)
- dep_tail = dep_tail.sort_values("Nbr_demande", ascending = False)
- dep_tail.columns = ["metier", "nbr_demande"]
-
-
- fig1= plt.figure()
- sns.barplot(y= "metier", x= "nbr_demande", data = dep_head,
- orient="h", palette ="Reds_r")
- plt.xlabel("")
- plt.title("Les métier les plus demandés", fontsize= 18)
- plt.ylabel("")
-
- st.pyplot(fig1)
-
- fig2= plt.figure()
- sns.barplot(y= "metier", x= "nbr_demande", data = dep_tail, orient="h", palette ="Blues")
- plt.xlabel("")
- plt.title("Les métier les moins demandés", fontsize= 18)
- plt.ylabel("")
- plt.xlim(0,50)
-
- st.pyplot(fig2)
-
-def demande_metier(metier):
-
- df_metier = df[df.metier == metier]
- choro = df_metier.groupby(by=["NomDept"]).agg({"Nbr_demande":"sum"})
- choro = choro.reset_index()
- choro['id']=choro['NomDept'].apply(lambda x: dico_map[x])
-
-
- fig = px.choropleth_mapbox(choro, width = 900, height =100, locations="id", geojson = departement_geo, color = "Nbr_demande", hover_name = "NomDept",
- mapbox_style = "open-street-map",
- center = {"lat":46.80, "lon":3.02}, zoom = 5, opacity = 0.5,
- title = metier)
-
- fig.update_geos(fitbounds = "locations", visible = False)
- fig.update_layout(height=800, title_font_size = 25)
-
- st.plotly_chart(fig)
-
-def departement_page():
-
- dep = st.selectbox("Choisir un département",liste_dep)
- heatmap(dep)
-
-
-
-def metier_page():
-
-
- famille = st.selectbox("Famille de métier",liste_famille)
- liste_metier = df[df.famille == famille]["metier"].unique().tolist()
- metier = st.selectbox("Choisir un métier", liste_metier)
-
- demande_metier(metier)
-
-
-def contact_message():
- st.header(":mailbox: Let's Get In Touch !")
-
- name, message = st.columns((1,2))
- with name:
- contact_form = """"""
- st.markdown(contact_form, unsafe_allow_html=True)
-
- with message :
- contact_form2 = """
-
- """)
-
- openai_api_key_textbox = gr.Textbox(placeholder="Paste your OpenAI API key (sk-...) and hit Enter",
- show_label=False, lines=1, type='password')
-
- with gr.Row():
- with gr.Column(scale=1, min_width=TALKING_HEAD_WIDTH, visible=True):
- # speak_text_cb = gr.Checkbox(label="Enable speech", value=False)
- # speak_text_cb.change(update_foo, inputs=[speak_text_cb, speak_text_state],
- # outputs=[speak_text_state])
-
- my_file = gr.File(label="Upload a file", type="file", visible=False)
- tmp_file = gr.File(LOOPING_TALKING_HEAD, visible=False)
- # tmp_file_url = "/file=" + tmp_file.value['name']
- htm_video = create_html_video(LOOPING_TALKING_HEAD, TALKING_HEAD_WIDTH)
- video_html = gr.HTML(htm_video)
-
- # my_aud_file = gr.File(label="Audio file", type="file", visible=True)
- tmp_aud_file = gr.File("audios/tempfile.mp3", visible=False)
- tmp_aud_file_url = "/file=" + tmp_aud_file.value['name']
-            htm_audio = f'<audio><source src={tmp_aud_file_url} type="audio/mp3" /></audio>'
- audio_html = gr.HTML(htm_audio)
-
- with gr.Column(scale=7):
- chatbot = gr.Chatbot()
-
- with gr.Row():
- message = gr.Textbox(label="What's on your mind??",
- placeholder="What's the answer to life, the universe, and everything?",
- lines=1)
- submit = gr.Button(value="Send", variant="secondary").style(full_width=False)
-
- # UNCOMMENT TO USE WHISPER
- with gr.Row():
- audio_comp = gr.Microphone(source="microphone", type="filepath", label="Just say it!",
- interactive=True, streaming=False)
- audio_comp.change(transcribe, inputs=[audio_comp, whisper_lang_state], outputs=[message])
-
- # TEMPORARY FOR TESTING
- # with gr.Row():
- # audio_comp_tb = gr.Textbox(label="Just say it!", lines=1)
- # audio_comp_tb.submit(transcribe_dummy, inputs=[audio_comp_tb, whisper_lang_state], outputs=[message])
-
- gr.Examples(
- examples=["How many people live in Canada?",
- "What is 2 to the 30th power?",
- "If x+y=10 and x-y=4, what are x and y?",
- "How much did it rain in SF today?",
- "Get me information about the movie 'Avatar'",
- "What are the top tech headlines in the US?",
- "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses - "
- "if I remove all the pairs of sunglasses from the desk, how many purple items remain on it?"],
- inputs=message
- )
-
- # with gr.Tab("Settings"):
- # tools_cb_group = gr.CheckboxGroup(label="Tools:", choices=TOOLS_LIST,
- # value=TOOLS_DEFAULT_LIST)
- # tools_cb_group.change(update_selected_tools,
- # inputs=[tools_cb_group, tools_list_state, llm_state],
- # outputs=[tools_list_state, llm_state, chain_state, express_chain_state])
-
- # trace_chain_cb = gr.Checkbox(label="Show reasoning chain in chat bubble", value=False)
- # trace_chain_cb.change(update_foo, inputs=[trace_chain_cb, trace_chain_state],
- # outputs=[trace_chain_state])
-
- # force_translate_cb = gr.Checkbox(label="Force translation to selected Output Language",
- # value=FORCE_TRANSLATE_DEFAULT)
- # force_translate_cb.change(update_foo, inputs=[force_translate_cb, force_translate_state],
- # outputs=[force_translate_state])
-
- # # speak_text_cb = gr.Checkbox(label="Speak text from agent", value=False)
- # # speak_text_cb.change(update_foo, inputs=[speak_text_cb, speak_text_state],
- # # outputs=[speak_text_state])
-
- # talking_head_cb = gr.Checkbox(label="Show talking head", value=True)
- # talking_head_cb.change(update_talking_head, inputs=[talking_head_cb, talking_head_state],
- # outputs=[talking_head_state, video_html])
-
- # monologue_cb = gr.Checkbox(label="Babel fish mode (translate/restate what you enter, no conversational agent)",
- # value=False)
- # monologue_cb.change(update_foo, inputs=[monologue_cb, monologue_state],
- # outputs=[monologue_state])
-
- # use_gpt4_cb = gr.Checkbox(label="Use GPT-4 (experimental) if your OpenAI API has access to it",
- # value=USE_GPT4_DEFAULT)
- # use_gpt4_cb.change(set_openai_api_key,
- # inputs=[openai_api_key_textbox, use_gpt4_cb],
- # outputs=[chain_state, express_chain_state, llm_state, embeddings_state,
- # qa_chain_state, memory_state, use_gpt4_state])
-
- # reset_btn = gr.Button(value="Reset chat", variant="secondary").style(full_width=False)
- # reset_btn.click(reset_memory, inputs=[history_state, memory_state],
- # outputs=[chatbot, history_state, memory_state])
-
- # with gr.Tab("Whisper STT"):
- # whisper_lang_radio = gr.Radio(label="Whisper speech-to-text language:", choices=[
- # WHISPER_DETECT_LANG, "Arabic", "Arabic (Gulf)", "Catalan", "Chinese (Cantonese)", "Chinese (Mandarin)",
- # "Danish", "Dutch", "English (Australian)", "English (British)", "English (Indian)", "English (New Zealand)",
- # "English (South African)", "English (US)", "English (Welsh)", "Finnish", "French", "French (Canadian)",
- # "German", "German (Austrian)", "Georgian", "Hindi", "Icelandic", "Indonesian", "Italian", "Japanese",
- # "Korean", "Norwegian", "Polish",
- # "Portuguese (Brazilian)", "Portuguese (European)", "Romanian", "Russian", "Spanish (European)",
- # "Spanish (Mexican)", "Spanish (US)", "Swedish", "Turkish", "Ukrainian", "Welsh"],
- # value=WHISPER_DETECT_LANG)
-
- # whisper_lang_radio.change(update_foo,
- # inputs=[whisper_lang_radio, whisper_lang_state],
- # outputs=[whisper_lang_state])
-
- # with gr.Tab("Output Language"):
- # lang_level_radio = gr.Radio(label="Language level:", choices=[
- # LANG_LEVEL_DEFAULT, "1st grade", "2nd grade", "3rd grade", "4th grade", "5th grade", "6th grade",
- # "7th grade", "8th grade", "9th grade", "10th grade", "11th grade", "12th grade", "University"],
- # value=LANG_LEVEL_DEFAULT)
- # lang_level_radio.change(update_foo, inputs=[lang_level_radio, lang_level_state],
- # outputs=[lang_level_state])
-
- # translate_to_radio = gr.Radio(label="Language:", choices=[
- # TRANSLATE_TO_DEFAULT, "Arabic", "Arabic (Gulf)", "Catalan", "Chinese (Cantonese)", "Chinese (Mandarin)",
- # "Danish", "Dutch", "English (Australian)", "English (British)", "English (Indian)", "English (New Zealand)",
- # "English (South African)", "English (US)", "English (Welsh)", "Finnish", "French", "French (Canadian)",
- # "German", "German (Austrian)", "Georgian", "Hindi", "Icelandic", "Indonesian", "Italian", "Japanese",
- # "Korean", "Norwegian", "Polish",
- # "Portuguese (Brazilian)", "Portuguese (European)", "Romanian", "Russian", "Spanish (European)",
- # "Spanish (Mexican)", "Spanish (US)", "Swedish", "Turkish", "Ukrainian", "Welsh",
- # "emojis", "Gen Z slang", "how the stereotypical Karen would say it", "Klingon", "Neanderthal",
- # "Pirate", "Strange Planet expospeak technical talk", "Yoda"],
- # value=TRANSLATE_TO_DEFAULT)
-
- # translate_to_radio.change(update_foo,
- # inputs=[translate_to_radio, translate_to_state],
- # outputs=[translate_to_state])
-
- # with gr.Tab("Formality"):
- # formality_radio = gr.Radio(label="Formality:",
- # choices=[FORMALITY_DEFAULT, "Casual", "Polite", "Honorific"],
- # value=FORMALITY_DEFAULT)
- # formality_radio.change(update_foo,
- # inputs=[formality_radio, formality_state],
- # outputs=[formality_state])
-
- # with gr.Tab("Lit Style"):
- # literary_style_radio = gr.Radio(label="Literary style:", choices=[
- # LITERARY_STYLE_DEFAULT, "Prose", "Story", "Summary", "Outline", "Bullets", "Poetry", "Haiku", "Limerick",
- # "Rap",
- # "Joke", "Knock-knock", "FAQ"],
- # value=LITERARY_STYLE_DEFAULT)
-
- # literary_style_radio.change(update_foo,
- # inputs=[literary_style_radio, literary_style_state],
- # outputs=[literary_style_state])
-
- # with gr.Tab("Emotions"):
- # anticipation_level_radio = gr.Radio(label="Anticipation level:",
- # choices=[EMOTION_DEFAULT, "Interest", "Anticipation", "Vigilance"],
- # value=EMOTION_DEFAULT)
- # anticipation_level_radio.change(update_foo,
- # inputs=[anticipation_level_radio, anticipation_level_state],
- # outputs=[anticipation_level_state])
-
- # joy_level_radio = gr.Radio(label="Joy level:",
- # choices=[EMOTION_DEFAULT, "Serenity", "Joy", "Ecstasy"],
- # value=EMOTION_DEFAULT)
- # joy_level_radio.change(update_foo,
- # inputs=[joy_level_radio, joy_level_state],
- # outputs=[joy_level_state])
-
- # trust_level_radio = gr.Radio(label="Trust level:",
- # choices=[EMOTION_DEFAULT, "Acceptance", "Trust", "Admiration"],
- # value=EMOTION_DEFAULT)
- # trust_level_radio.change(update_foo,
- # inputs=[trust_level_radio, trust_level_state],
- # outputs=[trust_level_state])
-
- # fear_level_radio = gr.Radio(label="Fear level:",
- # choices=[EMOTION_DEFAULT, "Apprehension", "Fear", "Terror"],
- # value=EMOTION_DEFAULT)
- # fear_level_radio.change(update_foo,
- # inputs=[fear_level_radio, fear_level_state],
- # outputs=[fear_level_state])
-
- # surprise_level_radio = gr.Radio(label="Surprise level:",
- # choices=[EMOTION_DEFAULT, "Distraction", "Surprise", "Amazement"],
- # value=EMOTION_DEFAULT)
- # surprise_level_radio.change(update_foo,
- # inputs=[surprise_level_radio, surprise_level_state],
- # outputs=[surprise_level_state])
-
- # sadness_level_radio = gr.Radio(label="Sadness level:",
- # choices=[EMOTION_DEFAULT, "Pensiveness", "Sadness", "Grief"],
- # value=EMOTION_DEFAULT)
- # sadness_level_radio.change(update_foo,
- # inputs=[sadness_level_radio, sadness_level_state],
- # outputs=[sadness_level_state])
-
- # disgust_level_radio = gr.Radio(label="Disgust level:",
- # choices=[EMOTION_DEFAULT, "Boredom", "Disgust", "Loathing"],
- # value=EMOTION_DEFAULT)
- # disgust_level_radio.change(update_foo,
- # inputs=[disgust_level_radio, disgust_level_state],
- # outputs=[disgust_level_state])
-
- # anger_level_radio = gr.Radio(label="Anger level:",
- # choices=[EMOTION_DEFAULT, "Annoyance", "Anger", "Rage"],
- # value=EMOTION_DEFAULT)
- # anger_level_radio.change(update_foo,
- # inputs=[anger_level_radio, anger_level_state],
- # outputs=[anger_level_state])
-
- # with gr.Tab("Max Words"):
- # num_words_slider = gr.Slider(label="Max number of words to generate (0 for don't care)",
- # value=NUM_WORDS_DEFAULT, minimum=0, maximum=MAX_WORDS, step=10)
- # num_words_slider.change(update_foo,
- # inputs=[num_words_slider, num_words_state],
- # outputs=[num_words_state])
-
- # with gr.Tab("Embeddings"):
- # embeddings_text_box = gr.Textbox(label="Enter text for embeddings and hit Create:",
- # lines=20)
-
- # with gr.Row():
- # use_embeddings_cb = gr.Checkbox(label="Use embeddings", value=False)
- # use_embeddings_cb.change(update_use_embeddings, inputs=[use_embeddings_cb, use_embeddings_state],
- # outputs=[use_embeddings_state])
-
- # embeddings_text_submit = gr.Button(value="Create", variant="secondary").style(full_width=False)
- # embeddings_text_submit.click(update_embeddings,
- # inputs=[embeddings_text_box, embeddings_state, qa_chain_state],
- # outputs=[docsearch_state])
-
- # gr.HTML("""
- # This application, developed by James L. Weaver ,
- # demonstrates a conversational agent implemented with OpenAI GPT-3.5 and LangChain.
- # When necessary, it leverages tools for complex math, searching the internet, and accessing news and weather.
- # Uses talking heads from Ex-Human .
- # For faster inference without waiting in queue, you may duplicate the space.
- #
""")
-
-# gr.HTML("""
-#
-#
-#
-#
-#
-#
-#
-#
-# """)
-
-# gr.HTML("""
-#
-#
-# Powered by LangChain 🦜️🔗
-# """)
-
- message.submit(chat, inputs=[openai_api_key_textbox, message, history_state, chain_state, trace_chain_state,
- speak_text_state, talking_head_state, monologue_state,
- express_chain_state, num_words_state, formality_state,
- anticipation_level_state, joy_level_state, trust_level_state, fear_level_state,
- surprise_level_state, sadness_level_state, disgust_level_state, anger_level_state,
- lang_level_state, translate_to_state, literary_style_state,
- qa_chain_state, docsearch_state, use_embeddings_state,
- force_translate_state],
- outputs=[chatbot, history_state, video_html, my_file, audio_html, tmp_aud_file, message])
-
- submit.click(chat, inputs=[openai_api_key_textbox, message, history_state, chain_state, trace_chain_state,
- speak_text_state, talking_head_state, monologue_state,
- express_chain_state, num_words_state, formality_state,
- anticipation_level_state, joy_level_state, trust_level_state, fear_level_state,
- surprise_level_state, sadness_level_state, disgust_level_state, anger_level_state,
- lang_level_state, translate_to_state, literary_style_state,
- qa_chain_state, docsearch_state, use_embeddings_state,
- force_translate_state],
- outputs=[chatbot, history_state, video_html, my_file, audio_html, tmp_aud_file, message])
-
- openai_api_key_textbox.change(set_openai_api_key,
- inputs=[openai_api_key_textbox, use_gpt4_state],
- outputs=[chain_state, express_chain_state, llm_state, embeddings_state,
- qa_chain_state, memory_state, use_gpt4_state])
- openai_api_key_textbox.submit(set_openai_api_key,
- inputs=[openai_api_key_textbox, use_gpt4_state],
- outputs=[chain_state, express_chain_state, llm_state, embeddings_state,
- qa_chain_state, memory_state, use_gpt4_state])
-
-block.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/modules/rope.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/modules/rope.py
deleted file mode 100644
index 503e6748df2bb72b3c864c20b37cba5498ffdd21..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/modules/rope.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-from torch import nn
-import torch
-
-
-class XPos(nn.Module):
- """Length-extrapolatable positional embedding (xPos) from [Sun et al 2022](https://arxiv.org/abs/2212.10554v1).
- This applies an exponential decay to the RoPE rotation matrix.
-
- Args:
- dim (int): Embedding dimension.
- smoothing (float): Smoothing factor applied to the decay rates.
- base_scale (int): Base decay rate, given in terms of scaling time.
- device (torch.device, optional): Device on which to initialize the module.
- dtype (torch.dtype): dtype to use to generate the embedding.
- """
- def __init__(self, dim: int, smoothing: float = 0.4, base_scale: int = 512,
- device=None, dtype: torch.dtype = torch.float32):
- super().__init__()
- assert dim % 2 == 0
- assert dtype in [torch.float64, torch.float32]
- self.dtype = dtype
- self.base_scale = base_scale
-
- half_dim = dim // 2
- adim = torch.arange(half_dim, device=device, dtype=dtype)
- decay_rates = (adim / half_dim + smoothing) / (1.0 + smoothing)
- self.register_buffer("decay_rates", decay_rates)
- self.decay: tp.Optional[torch.Tensor] = None
-
- def get_decay(self, start: int, end: int):
- """Create complex decay tensor, cache values for fast computation."""
- if self.decay is None or end > self.decay.shape[0]:
- assert isinstance(self.decay_rates, torch.Tensor) # Satisfy type checker.
- idx = torch.arange(end, device=self.decay_rates.device, dtype=self.dtype)
- power = idx / self.base_scale
- scale = self.decay_rates ** power.unsqueeze(-1)
- self.decay = torch.polar(scale, torch.zeros_like(scale))
- return self.decay[start:end] # [T, C/2]
-
-
-class RotaryEmbedding(nn.Module):
- """Rotary positional embedding (RoPE) from [Su et al 2022](https://arxiv.org/abs/2104.09864).
-
- Args:
- dim (int): Embedding dimension (twice the number of frequencies).
- max_period (float): Maximum period of the rotation frequencies.
- xpos (bool): Use xPos, applies an exponential decay to rotation matrix.
- scale (float): Scale of positional embedding, set to 0 to deactivate.
- device (torch.device, optional): Device on which to initialize the module.
- dtype (torch.dtype): dtype to use to generate the embedding.
- """
- def __init__(self, dim: int, max_period: float = 10000.0, xpos: bool = False,
- scale: float = 1.0, device=None, dtype: torch.dtype = torch.float32):
- super().__init__()
- assert dim % 2 == 0
- self.scale = scale
- assert dtype in [torch.float64, torch.float32]
- self.dtype = dtype
-
- adim = torch.arange(0, dim, 2, device=device, dtype=dtype)[: (dim // 2)]
- frequencies = 1.0 / (max_period ** (adim / dim))
- self.register_buffer("frequencies", frequencies)
- self.rotation: tp.Optional[torch.Tensor] = None
-
- self.xpos = XPos(dim, device=device, dtype=dtype) if xpos else None
-
- def get_rotation(self, start: int, end: int):
- """Create complex rotation tensor, cache values for fast computation."""
- if self.rotation is None or end > self.rotation.shape[0]:
- assert isinstance(self.frequencies, torch.Tensor) # Satisfy type checker.
- idx = torch.arange(end, device=self.frequencies.device, dtype=self.dtype)
- angles = torch.outer(idx, self.frequencies)
- self.rotation = torch.polar(torch.ones_like(angles), angles)
- return self.rotation[start:end]
-
- def rotate(self, x: torch.Tensor, start: int = 0, invert_decay: bool = False):
- """Apply rope rotation to query or key tensor."""
- T = x.shape[1]
- rotation = self.get_rotation(start, start + T).unsqueeze(0).unsqueeze(2)
-
- if self.xpos:
- decay = self.xpos.get_decay(start, start + T).unsqueeze(0).unsqueeze(2)
- else:
- decay = 1.0
-
- if invert_decay:
- decay = decay ** -1
-
- x_complex = torch.view_as_complex(x.to(self.dtype).reshape(*x.shape[:-1], -1, 2))
- scaled_rotation = (rotation * decay) * self.scale + (1.0 - self.scale)
- x_out = torch.view_as_real(x_complex * scaled_rotation).flatten(-2)
-
- return x_out.type_as(x)
-
- def rotate_qk(self, query: torch.Tensor, key: torch.Tensor, start: int = 0):
- """ Apply rope rotation to both query and key tensors.
- Supports streaming mode, in which query and key are not expected to have the same shape.
- In streaming mode, key will be of length [P + C] with P the cached past timesteps, but
- query will be [C] (typically C == 1).
-
- Args:
- query (torch.Tensor): Query to rotate.
- key (torch.Tensor): Key to rotate.
- start (int): Start index of the sequence for time offset.
- """
- query_timesteps = query.shape[1]
- key_timesteps = key.shape[1]
- streaming_offset = key_timesteps - query_timesteps
-
- query_out = self.rotate(query, start + streaming_offset)
- key_out = self.rotate(key, start, invert_decay=True)
-
- return query_out, key_out
diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/losses/__init__.py b/spaces/SuYuanS/AudioCraft_Plus/tests/losses/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/tests/losses/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_rope.py b/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_rope.py
deleted file mode 100644
index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/tests/modules/test_rope.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from audiocraft.modules.rope import RotaryEmbedding
-from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend
-
-
-def test_rope():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert list(xq_out.shape) == [B, T, H, C]
- assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_rope_io_dtypes():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32)
- rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64)
-
- # Test bfloat16 inputs w/ both 32 and 64 precision rope.
- xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16)
- xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16)
- xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16)
- assert xq_out.dtype == torch.bfloat16
- xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16)
- assert xq_out.dtype == torch.bfloat16
-
- # Test float32 inputs w/ both 32 and 64 precision rope.
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32)
- xk_32 = torch.rand((B, T, H, C)).to(torch.float32)
- xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32)
- assert xq_out.dtype == torch.float32
- xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32)
- assert xq_out.dtype == torch.float32
-
-
-def test_transformer_with_rope():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- for pos in ['rope', 'sin_rope']:
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1,
- positional_embedding=pos)
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- out = tr(x)
- assert list(out.shape) == list(x.shape)
-
-
-@torch.no_grad()
-def test_rope_streaming():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, causal=True, dropout=0.,
- custom=True, positional_embedding='rope')
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- ref = tr(x)
-
- with tr.streaming():
- outs = []
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr(frame))
-
- out = torch.cat(outs, dim=1)
- assert list(out.shape) == [3, steps, 16]
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-@torch.no_grad()
-def test_rope_streaming_past_context():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
-
- for context in [None, 10]:
- tr = StreamingTransformer(
- 16, 4, 1 if context else 2,
- causal=True, past_context=context, custom=True,
- dropout=0., positional_embedding='rope')
- tr.eval()
-
- steps = 20
- x = torch.randn(3, steps, 16)
- ref = tr(x)
-
- with tr.streaming():
- outs = []
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr(frame))
-
- out = torch.cat(outs, dim=1)
- assert list(out.shape) == [3, steps, 16]
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-def test_rope_memory_efficient():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1,
- positional_embedding='rope')
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1,
- positional_embedding='rope')
- tr_mem_efficient.load_state_dict(tr.state_dict())
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_mem_efficient(x)
- # Check at float precision b/c this is the rope default.
- assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm()
-
-
-def test_rope_with_xpos():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C, xpos=True)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert list(xq_out.shape) == [B, T, H, C]
- assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_positional_scale():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert torch.allclose(xq, xq_out)
- assert torch.allclose(xk, xk_out)
diff --git a/spaces/Sumit7864/Image-Enhancer/docs/FAQ.md b/spaces/Sumit7864/Image-Enhancer/docs/FAQ.md
deleted file mode 100644
index 843f4dd847487066a1c7c105c7292e2de0bd5f1a..0000000000000000000000000000000000000000
--- a/spaces/Sumit7864/Image-Enhancer/docs/FAQ.md
+++ /dev/null
@@ -1,10 +0,0 @@
-# FAQ
-
-1. **Q: How to select models?**
-A: Please refer to [docs/model_zoo.md](docs/model_zoo.md)
-
-1. **Q: Can `face_enhance` be used for anime images/animation videos?**
-A: No, it can only be used for real faces. It is recommended not to use this option for anime images/animation videos to save GPU memory.
-
-1. **Q: Error "slow_conv2d_cpu" not implemented for 'Half'**
-A: In order to save GPU memory consumption and speed up inference, Real-ESRGAN uses half precision (fp16) during inference by default. However, some operators for half inference are not implemented in CPU mode. You need to add **`--fp32` option** for the commands. For example, `python inference_realesrgan.py -n RealESRGAN_x4plus.pth -i inputs --fp32`.
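A hedged illustration of the failure mode described in the last FAQ entry, using a toy convolution rather than the Real-ESRGAN network; whether the half-precision call actually raises depends on the PyTorch build.

```python
import torch

conv = torch.nn.Conv2d(3, 3, 3)
x = torch.randn(1, 3, 32, 32)

try:
    conv.half()(x.half())          # may raise RuntimeError on CPU-only setups
except RuntimeError as err:
    print(err)                     # e.g. "slow_conv2d_cpu" not implemented for 'Half'

conv.float()(x.float())            # the fp32 path (what --fp32 forces) works on CPU
```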
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_pylabtools.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_pylabtools.py
deleted file mode 100644
index dd1a0ff58b5396466f5328ad91ba769c67291c8c..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_pylabtools.py
+++ /dev/null
@@ -1,270 +0,0 @@
-"""Tests for pylab tools module.
-"""
-
-# Copyright (c) IPython Development Team.
-# Distributed under the terms of the Modified BSD License.
-
-
-from binascii import a2b_base64
-from io import BytesIO
-
-import pytest
-
-matplotlib = pytest.importorskip("matplotlib")
-matplotlib.use('Agg')
-from matplotlib.figure import Figure
-
-from matplotlib import pyplot as plt
-from matplotlib_inline import backend_inline
-import numpy as np
-
-from IPython.core.getipython import get_ipython
-from IPython.core.interactiveshell import InteractiveShell
-from IPython.core.display import _PNG, _JPEG
-from .. import pylabtools as pt
-
-from IPython.testing import decorators as dec
-
-
-def test_figure_to_svg():
- # simple empty-figure test
- fig = plt.figure()
- assert pt.print_figure(fig, "svg") is None
-
- plt.close('all')
-
- # simple check for at least svg-looking output
- fig = plt.figure()
- ax = fig.add_subplot(1,1,1)
- ax.plot([1,2,3])
- plt.draw()
- svg = pt.print_figure(fig, "svg")[:100].lower()
- assert "doctype svg" in svg
-
-
-def _check_pil_jpeg_bytes():
- """Skip if PIL can't write JPEGs to BytesIO objects"""
- # PIL's JPEG plugin can't write to BytesIO objects
- # Pillow fixes this
- from PIL import Image
- buf = BytesIO()
- img = Image.new("RGB", (4,4))
- try:
- img.save(buf, 'jpeg')
- except Exception as e:
- ename = e.__class__.__name__
- raise pytest.skip("PIL can't write JPEG to BytesIO: %s: %s" % (ename, e)) from e
-
-@dec.skip_without("PIL.Image")
-def test_figure_to_jpeg():
- _check_pil_jpeg_bytes()
- # simple check for at least jpeg-looking output
- fig = plt.figure()
- ax = fig.add_subplot(1,1,1)
- ax.plot([1,2,3])
- plt.draw()
- jpeg = pt.print_figure(fig, 'jpeg', pil_kwargs={'optimize': 50})[:100].lower()
- assert jpeg.startswith(_JPEG)
-
-def test_retina_figure():
- # simple empty-figure test
- fig = plt.figure()
- assert pt.retina_figure(fig) == None
- plt.close('all')
-
- fig = plt.figure()
- ax = fig.add_subplot(1,1,1)
- ax.plot([1,2,3])
- plt.draw()
- png, md = pt.retina_figure(fig)
- assert png.startswith(_PNG)
- assert "width" in md
- assert "height" in md
-
-
-_fmt_mime_map = {
- 'png': 'image/png',
- 'jpeg': 'image/jpeg',
- 'pdf': 'application/pdf',
- 'retina': 'image/png',
- 'svg': 'image/svg+xml',
-}
-
-def test_select_figure_formats_str():
- ip = get_ipython()
- for fmt, active_mime in _fmt_mime_map.items():
- pt.select_figure_formats(ip, fmt)
- for mime, f in ip.display_formatter.formatters.items():
- if mime == active_mime:
- assert Figure in f
- else:
- assert Figure not in f
-
-def test_select_figure_formats_kwargs():
- ip = get_ipython()
- kwargs = dict(bbox_inches="tight")
- pt.select_figure_formats(ip, "png", **kwargs)
- formatter = ip.display_formatter.formatters["image/png"]
- f = formatter.lookup_by_type(Figure)
- cell = f.keywords
- expected = kwargs
- expected["base64"] = True
- expected["fmt"] = "png"
- assert cell == expected
-
- # check that the formatter doesn't raise
- fig = plt.figure()
- ax = fig.add_subplot(1,1,1)
- ax.plot([1,2,3])
- plt.draw()
- formatter.enabled = True
- png = formatter(fig)
- assert isinstance(png, str)
- png_bytes = a2b_base64(png)
- assert png_bytes.startswith(_PNG)
-
-def test_select_figure_formats_set():
- ip = get_ipython()
- for fmts in [
- {'png', 'svg'},
- ['png'],
- ('jpeg', 'pdf', 'retina'),
- {'svg'},
- ]:
- active_mimes = {_fmt_mime_map[fmt] for fmt in fmts}
- pt.select_figure_formats(ip, fmts)
- for mime, f in ip.display_formatter.formatters.items():
- if mime in active_mimes:
- assert Figure in f
- else:
- assert Figure not in f
-
-def test_select_figure_formats_bad():
- ip = get_ipython()
- with pytest.raises(ValueError):
- pt.select_figure_formats(ip, 'foo')
- with pytest.raises(ValueError):
- pt.select_figure_formats(ip, {'png', 'foo'})
- with pytest.raises(ValueError):
- pt.select_figure_formats(ip, ['retina', 'pdf', 'bar', 'bad'])
-
-def test_import_pylab():
- ns = {}
- pt.import_pylab(ns, import_all=False)
- assert "plt" in ns
- assert ns["np"] == np
-
-
-class TestPylabSwitch(object):
- class Shell(InteractiveShell):
- def init_history(self):
- """Sets up the command history, and starts regular autosaves."""
- self.config.HistoryManager.hist_file = ":memory:"
- super().init_history()
-
- def enable_gui(self, gui):
- pass
-
- def setup(self):
- import matplotlib
- def act_mpl(backend):
- matplotlib.rcParams['backend'] = backend
-
- # Save rcParams since they get modified
- self._saved_rcParams = matplotlib.rcParams
- self._saved_rcParamsOrig = matplotlib.rcParamsOrig
- matplotlib.rcParams = dict(backend='Qt4Agg')
- matplotlib.rcParamsOrig = dict(backend='Qt4Agg')
-
- # Mock out functions
- self._save_am = pt.activate_matplotlib
- pt.activate_matplotlib = act_mpl
- self._save_ip = pt.import_pylab
- pt.import_pylab = lambda *a,**kw:None
- self._save_cis = backend_inline.configure_inline_support
- backend_inline.configure_inline_support = lambda *a, **kw: None
-
- def teardown(self):
- pt.activate_matplotlib = self._save_am
- pt.import_pylab = self._save_ip
- backend_inline.configure_inline_support = self._save_cis
- import matplotlib
- matplotlib.rcParams = self._saved_rcParams
- matplotlib.rcParamsOrig = self._saved_rcParamsOrig
-
- def test_qt(self):
- s = self.Shell()
- gui, backend = s.enable_matplotlib(None)
- assert gui == "qt"
- assert s.pylab_gui_select == "qt"
-
- gui, backend = s.enable_matplotlib("inline")
- assert gui == "inline"
- assert s.pylab_gui_select == "qt"
-
- gui, backend = s.enable_matplotlib("qt")
- assert gui == "qt"
- assert s.pylab_gui_select == "qt"
-
- gui, backend = s.enable_matplotlib("inline")
- assert gui == "inline"
- assert s.pylab_gui_select == "qt"
-
- gui, backend = s.enable_matplotlib()
- assert gui == "qt"
- assert s.pylab_gui_select == "qt"
-
- def test_inline(self):
- s = self.Shell()
- gui, backend = s.enable_matplotlib("inline")
- assert gui == "inline"
- assert s.pylab_gui_select == None
-
- gui, backend = s.enable_matplotlib("inline")
- assert gui == "inline"
- assert s.pylab_gui_select == None
-
- gui, backend = s.enable_matplotlib("qt")
- assert gui == "qt"
- assert s.pylab_gui_select == "qt"
-
- def test_inline_twice(self):
- "Using '%matplotlib inline' twice should not reset formatters"
-
- ip = self.Shell()
- gui, backend = ip.enable_matplotlib("inline")
- assert gui == "inline"
-
- fmts = {'png'}
- active_mimes = {_fmt_mime_map[fmt] for fmt in fmts}
- pt.select_figure_formats(ip, fmts)
-
- gui, backend = ip.enable_matplotlib("inline")
- assert gui == "inline"
-
- for mime, f in ip.display_formatter.formatters.items():
- if mime in active_mimes:
- assert Figure in f
- else:
- assert Figure not in f
-
- def test_qt_gtk(self):
- s = self.Shell()
- gui, backend = s.enable_matplotlib("qt")
- assert gui == "qt"
- assert s.pylab_gui_select == "qt"
-
- gui, backend = s.enable_matplotlib("gtk")
- assert gui == "qt"
- assert s.pylab_gui_select == "qt"
-
-
-def test_no_gui_backends():
- for k in ['agg', 'svg', 'pdf', 'ps']:
- assert k not in pt.backend2gui
-
-
-def test_figure_no_canvas():
- fig = Figure()
- fig.canvas = None
- pt.print_figure(fig)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/video.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/video.py
deleted file mode 100644
index fad4a0e843a401ff081552269181620708979679..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/video.py
+++ /dev/null
@@ -1,121 +0,0 @@
-from typing import TYPE_CHECKING, Any, Optional, Type, TypeVar, Union
-
-import numpy as np
-
-from docarray.base_doc import BaseDoc
-from docarray.documents import AudioDoc
-from docarray.typing import AnyEmbedding, AnyTensor, VideoBytes
-from docarray.typing.tensor.abstract_tensor import AbstractTensor
-from docarray.typing.tensor.video.video_tensor import VideoTensor
-from docarray.typing.url.video_url import VideoUrl
-from docarray.utils._internal.misc import import_library
-
-if TYPE_CHECKING:
- import tensorflow as tf # type: ignore
- import torch
-else:
- tf = import_library('tensorflow', raise_error=False)
- torch = import_library('torch', raise_error=False)
-
-
-T = TypeVar('T', bound='VideoDoc')
-
-
-class VideoDoc(BaseDoc):
- """
- Document for handling video.
-
- The Video Document can contain:
-
- - a [`VideoUrl`][docarray.typing.url.VideoUrl] (`VideoDoc.url`)
- - an [`AudioDoc`][docarray.documents.AudioDoc] (`VideoDoc.audio`)
- - a [`VideoTensor`](../../../api_references/typing/tensor/video) (`VideoDoc.tensor`)
- - an [`AnyTensor`](../../../api_references/typing/tensor/tensor) representing the indices of the video's key frames (`VideoDoc.key_frame_indices`)
- - an [`AnyEmbedding`](../../../api_references/typing/tensor/embedding) (`VideoDoc.embedding`)
- - a [`VideoBytes`][docarray.typing.bytes.VideoBytes] object (`VideoDoc.bytes_`)
-
- You can use this Document directly:
-
- ```python
- from docarray.documents import VideoDoc
-
- # use it directly
- vid = VideoDoc(
- url='https://github.com/docarray/docarray/blob/main/tests/toydata/mov_bbb.mp4?raw=true'
- )
- vid.tensor, vid.audio.tensor, vid.key_frame_indices = vid.url.load()
- # model = MyEmbeddingModel()
- # vid.embedding = model(vid.tensor)
- ```
-
- You can extend this Document:
-
- ```python
- from typing import Optional
-
- from docarray.documents import TextDoc, VideoDoc
-
-
- # extend it
- class MyVideo(VideoDoc):
- name: Optional[TextDoc]
-
-
- video = MyVideo(
- url='https://github.com/docarray/docarray/blob/main/tests/toydata/mov_bbb.mp4?raw=true'
- )
- video.name = TextDoc(text='my first video')
- video.tensor = video.url.load().video
- # model = MyEmbeddingModel()
- # video.embedding = model(video.tensor)
- ```
-
- You can use this Document for composition:
-
- ```python
- from docarray import BaseDoc
- from docarray.documents import TextDoc, VideoDoc
-
-
- # compose it
- class MultiModalDoc(BaseDoc):
- video: VideoDoc
- text: TextDoc
-
-
- mmdoc = MultiModalDoc(
- video=VideoDoc(
- url='https://github.com/docarray/docarray/blob/main/tests/toydata/mov_bbb.mp4?raw=true'
- ),
- text=TextDoc(text='hello world, how are you doing?'),
- )
- mmdoc.video.tensor = mmdoc.video.url.load().video
-
- # or
- mmdoc.video.bytes_ = mmdoc.video.url.load_bytes()
- mmdoc.video.tensor = mmdoc.video.bytes_.load().video
- ```
- """
-
- url: Optional[VideoUrl]
- audio: Optional[AudioDoc] = AudioDoc()
- tensor: Optional[VideoTensor]
- key_frame_indices: Optional[AnyTensor]
- embedding: Optional[AnyEmbedding]
- bytes_: Optional[VideoBytes]
-
- @classmethod
- def validate(
- cls: Type[T],
- value: Union[str, AbstractTensor, Any],
- ) -> T:
- if isinstance(value, str):
- value = cls(url=value)
- elif isinstance(value, (AbstractTensor, np.ndarray)) or (
- torch is not None
- and isinstance(value, torch.Tensor)
- or (tf is not None and isinstance(value, tf.Tensor))
- ):
- value = cls(tensor=value)
-
- return super().validate(value)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/utils/_internal/_typing.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/utils/_internal/_typing.py
deleted file mode 100644
index a008d562144becb33f7b0fe53a7df3ec192f308d..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/utils/_internal/_typing.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from typing import Any, Optional
-
-from typing_extensions import get_origin
-from typing_inspect import get_args, is_typevar, is_union_type
-
-from docarray.typing.tensor.abstract_tensor import AbstractTensor
-
-
-def is_type_tensor(type_: Any) -> bool:
- """Return True if type is a type Tensor or an Optional Tensor type."""
- return isinstance(type_, type) and issubclass(type_, AbstractTensor)
-
-
-def is_tensor_union(type_: Any) -> bool:
- """Return True if type is a Union of type Tensors."""
- is_union = is_union_type(type_)
- if is_union is None:
- return False
- else:
- return is_union and all(
- (is_type_tensor(t) or issubclass(t, type(None))) for t in get_args(type_)
- )
-
-
-def change_cls_name(cls: type, new_name: str, scope: Optional[dict] = None) -> None:
- """Change the name of a class.
-
- :param cls: the class to change the name of
- :param new_name: the new name
- :param scope: the scope in which the class is defined
- """
- if scope:
- scope[new_name] = cls
- cls.__qualname__ = cls.__qualname__[: -len(cls.__name__)] + new_name
- cls.__name__ = new_name
-
-
-def safe_issubclass(x: type, a_tuple: type) -> bool:
- """
-    This is a modified version of the built-in 'issubclass' function that supports non-class input.
-    A plain 'issubclass' call crashes when the first argument is not a class (e.g. a parameterised list or tuple type).
-
-    :param x: A class 'x'.
-    :param a_tuple: A class, or a tuple of classes.
-    :return: A boolean value - 'True' if 'x' is a subclass of 'a_tuple', 'False' otherwise.
-        Note that if the origin of 'x' is a list, tuple, dict or set, or 'x' is a TypeVar, the function immediately returns 'False'.
- """
- if (get_origin(x) in (list, tuple, dict, set)) or is_typevar(x):
- return False
- return issubclass(x, a_tuple)
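A short usage sketch of `safe_issubclass`, assuming docarray (and its typing_inspect dependency) is installed and the internal module path from the diff header above is unchanged.

```python
from typing import List, TypeVar

from docarray.utils._internal._typing import safe_issubclass

print(safe_issubclass(int, (int, float)))      # True, behaves like issubclass
print(safe_issubclass(List[int], tuple))       # False, origin is list
print(safe_issubclass(TypeVar("T"), object))   # False, TypeVars are skipped

try:
    issubclass(List[int], tuple)               # the plain built-in raises here
except TypeError as err:
    print(err)                                 # issubclass() arg 1 must be a class
```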
diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/priors/__init__.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/priors/__init__.py
deleted file mode 100644
index 3407398b08379f975aa59cb35e731b82d2a50360..0000000000000000000000000000000000000000
--- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/priors/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from . import fast_gp, mlp, flexible_categorical, differentiable_prior, prior_bag
-
-
-
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/debug.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/debug.py
deleted file mode 100644
index 2a3e7d298f393ed8532e4f11913635efc94cb329..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/debug.py
+++ /dev/null
@@ -1,199 +0,0 @@
-import importlib.resources
-import locale
-import logging
-import os
-import sys
-from optparse import Values
-from types import ModuleType
-from typing import Any, Dict, List, Optional
-
-import pip._vendor
-from pip._vendor.certifi import where
-from pip._vendor.packaging.version import parse as parse_version
-
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.cmdoptions import make_target_python
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.configuration import Configuration
-from pip._internal.metadata import get_environment
-from pip._internal.utils.logging import indent_log
-from pip._internal.utils.misc import get_pip_version
-
-logger = logging.getLogger(__name__)
-
-
-def show_value(name: str, value: Any) -> None:
- logger.info("%s: %s", name, value)
-
-
-def show_sys_implementation() -> None:
- logger.info("sys.implementation:")
- implementation_name = sys.implementation.name
- with indent_log():
- show_value("name", implementation_name)
-
-
-def create_vendor_txt_map() -> Dict[str, str]:
- with importlib.resources.open_text("pip._vendor", "vendor.txt") as f:
- # Purge non version specifying lines.
- # Also, remove any space prefix or suffixes (including comments).
- lines = [
- line.strip().split(" ", 1)[0] for line in f.readlines() if "==" in line
- ]
-
- # Transform into "module" -> version dict.
- return dict(line.split("==", 1) for line in lines)
-
-
-def get_module_from_module_name(module_name: str) -> ModuleType:
- # Module name can be uppercase in vendor.txt for some reason...
- module_name = module_name.lower().replace("-", "_")
- # PATCH: setuptools is actually only pkg_resources.
- if module_name == "setuptools":
- module_name = "pkg_resources"
-
- __import__(f"pip._vendor.{module_name}", globals(), locals(), level=0)
- return getattr(pip._vendor, module_name)
-
-
-def get_vendor_version_from_module(module_name: str) -> Optional[str]:
- module = get_module_from_module_name(module_name)
- version = getattr(module, "__version__", None)
-
- if not version:
- # Try to find version in debundled module info.
- assert module.__file__ is not None
- env = get_environment([os.path.dirname(module.__file__)])
- dist = env.get_distribution(module_name)
- if dist:
- version = str(dist.version)
-
- return version
-
-
-def show_actual_vendor_versions(vendor_txt_versions: Dict[str, str]) -> None:
- """Log the actual version and print extra info if there is
- a conflict or if the actual version could not be imported.
- """
- for module_name, expected_version in vendor_txt_versions.items():
- extra_message = ""
- actual_version = get_vendor_version_from_module(module_name)
- if not actual_version:
- extra_message = (
- " (Unable to locate actual module version, using"
- " vendor.txt specified version)"
- )
- actual_version = expected_version
- elif parse_version(actual_version) != parse_version(expected_version):
- extra_message = (
- " (CONFLICT: vendor.txt suggests version should"
- " be {})".format(expected_version)
- )
- logger.info("%s==%s%s", module_name, actual_version, extra_message)
-
-
-def show_vendor_versions() -> None:
- logger.info("vendored library versions:")
-
- vendor_txt_versions = create_vendor_txt_map()
- with indent_log():
- show_actual_vendor_versions(vendor_txt_versions)
-
-
-def show_tags(options: Values) -> None:
- tag_limit = 10
-
- target_python = make_target_python(options)
- tags = target_python.get_tags()
-
- # Display the target options that were explicitly provided.
- formatted_target = target_python.format_given()
- suffix = ""
- if formatted_target:
- suffix = f" (target: {formatted_target})"
-
- msg = "Compatible tags: {}{}".format(len(tags), suffix)
- logger.info(msg)
-
- if options.verbose < 1 and len(tags) > tag_limit:
- tags_limited = True
- tags = tags[:tag_limit]
- else:
- tags_limited = False
-
- with indent_log():
- for tag in tags:
- logger.info(str(tag))
-
- if tags_limited:
- msg = (
- "...\n[First {tag_limit} tags shown. Pass --verbose to show all.]"
- ).format(tag_limit=tag_limit)
- logger.info(msg)
-
-
-def ca_bundle_info(config: Configuration) -> str:
- levels = set()
- for key, _ in config.items():
- levels.add(key.split(".")[0])
-
- if not levels:
- return "Not specified"
-
- levels_that_override_global = ["install", "wheel", "download"]
- global_overriding_level = [
- level for level in levels if level in levels_that_override_global
- ]
- if not global_overriding_level:
- return "global"
-
- if "global" in levels:
- levels.remove("global")
- return ", ".join(levels)
-
-
-class DebugCommand(Command):
- """
- Display debug information.
- """
-
- usage = """
- %prog """
- ignore_require_venv = True
-
- def add_options(self) -> None:
- cmdoptions.add_target_python_options(self.cmd_opts)
- self.parser.insert_option_group(0, self.cmd_opts)
- self.parser.config.load()
-
- def run(self, options: Values, args: List[str]) -> int:
- logger.warning(
- "This command is only meant for debugging. "
- "Do not use this with automation for parsing and getting these "
- "details, since the output and options of this command may "
- "change without notice."
- )
- show_value("pip version", get_pip_version())
- show_value("sys.version", sys.version)
- show_value("sys.executable", sys.executable)
- show_value("sys.getdefaultencoding", sys.getdefaultencoding())
- show_value("sys.getfilesystemencoding", sys.getfilesystemencoding())
- show_value(
- "locale.getpreferredencoding",
- locale.getpreferredencoding(),
- )
- show_value("sys.platform", sys.platform)
- show_sys_implementation()
-
- show_value("'cert' config value", ca_bundle_info(self.parser.config))
- show_value("REQUESTS_CA_BUNDLE", os.environ.get("REQUESTS_CA_BUNDLE"))
- show_value("CURL_CA_BUNDLE", os.environ.get("CURL_CA_BUNDLE"))
- show_value("pip._vendor.certifi.where()", where())
- show_value("pip._vendor.DEBUNDLED", pip._vendor.DEBUNDLED)
-
- show_vendor_versions()
-
- show_tags(options)
-
- return SUCCESS
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_loop.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_loop.py
deleted file mode 100644
index 01c6cafbe53f1fcb12f7b382b2b35e2fd2c69933..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_loop.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from typing import Iterable, Tuple, TypeVar
-
-T = TypeVar("T")
-
-
-def loop_first(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
- """Iterate and generate a tuple with a flag for first value."""
- iter_values = iter(values)
- try:
- value = next(iter_values)
- except StopIteration:
- return
- yield True, value
- for value in iter_values:
- yield False, value
-
-
-def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
- """Iterate and generate a tuple with a flag for last value."""
- iter_values = iter(values)
- try:
- previous_value = next(iter_values)
- except StopIteration:
- return
- for value in iter_values:
- yield False, previous_value
- previous_value = value
- yield True, previous_value
-
-
-def loop_first_last(values: Iterable[T]) -> Iterable[Tuple[bool, bool, T]]:
- """Iterate and generate a tuple with a flag for first and last value."""
- iter_values = iter(values)
- try:
- previous_value = next(iter_values)
- except StopIteration:
- return
- first = True
- for value in iter_values:
- yield first, False, previous_value
- first = False
- previous_value = value
- yield first, True, previous_value
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py
deleted file mode 100644
index 8765b907d70c4a530bc90dc88f24b3df73473b01..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/_appengine_environ.py
+++ /dev/null
@@ -1,36 +0,0 @@
-"""
-This module provides means to detect the App Engine environment.
-"""
-
-import os
-
-
-def is_appengine():
- return is_local_appengine() or is_prod_appengine()
-
-
-def is_appengine_sandbox():
- """Reports if the app is running in the first generation sandbox.
-
- The second generation runtimes are technically still in a sandbox, but it
- is much less restrictive, so generally you shouldn't need to check for it.
- see https://cloud.google.com/appengine/docs/standard/runtimes
- """
- return is_appengine() and os.environ["APPENGINE_RUNTIME"] == "python27"
-
-
-def is_local_appengine():
- return "APPENGINE_RUNTIME" in os.environ and os.environ.get(
- "SERVER_SOFTWARE", ""
- ).startswith("Development/")
-
-
-def is_prod_appengine():
- return "APPENGINE_RUNTIME" in os.environ and os.environ.get(
- "SERVER_SOFTWARE", ""
- ).startswith("Google App Engine/")
-
-
-def is_prod_appengine_mvms():
- """Deprecated."""
- return False
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/exceptions.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/exceptions.py
deleted file mode 100644
index cba6f3f560f71b3b15ab6aaf21dde4f1bba1bd00..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/exceptions.py
+++ /dev/null
@@ -1,323 +0,0 @@
-from __future__ import absolute_import
-
-from .packages.six.moves.http_client import IncompleteRead as httplib_IncompleteRead
-
-# Base Exceptions
-
-
-class HTTPError(Exception):
- """Base exception used by this module."""
-
- pass
-
-
-class HTTPWarning(Warning):
- """Base warning used by this module."""
-
- pass
-
-
-class PoolError(HTTPError):
- """Base exception for errors caused within a pool."""
-
- def __init__(self, pool, message):
- self.pool = pool
- HTTPError.__init__(self, "%s: %s" % (pool, message))
-
- def __reduce__(self):
- # For pickling purposes.
- return self.__class__, (None, None)
-
-
-class RequestError(PoolError):
- """Base exception for PoolErrors that have associated URLs."""
-
- def __init__(self, pool, url, message):
- self.url = url
- PoolError.__init__(self, pool, message)
-
- def __reduce__(self):
- # For pickling purposes.
- return self.__class__, (None, self.url, None)
-
-
-class SSLError(HTTPError):
- """Raised when SSL certificate fails in an HTTPS connection."""
-
- pass
-
-
-class ProxyError(HTTPError):
- """Raised when the connection to a proxy fails."""
-
- def __init__(self, message, error, *args):
- super(ProxyError, self).__init__(message, error, *args)
- self.original_error = error
-
-
-class DecodeError(HTTPError):
- """Raised when automatic decoding based on Content-Type fails."""
-
- pass
-
-
-class ProtocolError(HTTPError):
- """Raised when something unexpected happens mid-request/response."""
-
- pass
-
-
-#: Renamed to ProtocolError but aliased for backwards compatibility.
-ConnectionError = ProtocolError
-
-
-# Leaf Exceptions
-
-
-class MaxRetryError(RequestError):
- """Raised when the maximum number of retries is exceeded.
-
- :param pool: The connection pool
- :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool`
- :param string url: The requested Url
- :param exceptions.Exception reason: The underlying error
-
- """
-
- def __init__(self, pool, url, reason=None):
- self.reason = reason
-
- message = "Max retries exceeded with url: %s (Caused by %r)" % (url, reason)
-
- RequestError.__init__(self, pool, url, message)
-
-
-class HostChangedError(RequestError):
- """Raised when an existing pool gets a request for a foreign host."""
-
- def __init__(self, pool, url, retries=3):
- message = "Tried to open a foreign host with url: %s" % url
- RequestError.__init__(self, pool, url, message)
- self.retries = retries
-
-
-class TimeoutStateError(HTTPError):
- """Raised when passing an invalid state to a timeout"""
-
- pass
-
-
-class TimeoutError(HTTPError):
- """Raised when a socket timeout error occurs.
-
- Catching this error will catch both :exc:`ReadTimeoutErrors
- <ReadTimeoutError>` and :exc:`ConnectTimeoutErrors <ConnectTimeoutError>`.
- """
-
- pass
-
-
-class ReadTimeoutError(TimeoutError, RequestError):
- """Raised when a socket timeout occurs while receiving data from a server"""
-
- pass
-
-
-# This timeout error does not have a URL attached and needs to inherit from the
-# base HTTPError
-class ConnectTimeoutError(TimeoutError):
- """Raised when a socket timeout occurs while connecting to a server"""
-
- pass
-
-
-class NewConnectionError(ConnectTimeoutError, PoolError):
- """Raised when we fail to establish a new connection. Usually ECONNREFUSED."""
-
- pass
-
-
-class EmptyPoolError(PoolError):
- """Raised when a pool runs out of connections and no more are allowed."""
-
- pass
-
-
-class ClosedPoolError(PoolError):
- """Raised when a request enters a pool after the pool has been closed."""
-
- pass
-
-
-class LocationValueError(ValueError, HTTPError):
- """Raised when there is something wrong with a given URL input."""
-
- pass
-
-
-class LocationParseError(LocationValueError):
- """Raised when get_host or similar fails to parse the URL input."""
-
- def __init__(self, location):
- message = "Failed to parse: %s" % location
- HTTPError.__init__(self, message)
-
- self.location = location
-
-
-class URLSchemeUnknown(LocationValueError):
- """Raised when a URL input has an unsupported scheme."""
-
- def __init__(self, scheme):
- message = "Not supported URL scheme %s" % scheme
- super(URLSchemeUnknown, self).__init__(message)
-
- self.scheme = scheme
-
-
-class ResponseError(HTTPError):
- """Used as a container for an error reason supplied in a MaxRetryError."""
-
- GENERIC_ERROR = "too many error responses"
- SPECIFIC_ERROR = "too many {status_code} error responses"
-
-
-class SecurityWarning(HTTPWarning):
- """Warned when performing security reducing actions"""
-
- pass
-
-
-class SubjectAltNameWarning(SecurityWarning):
- """Warned when connecting to a host with a certificate missing a SAN."""
-
- pass
-
-
-class InsecureRequestWarning(SecurityWarning):
- """Warned when making an unverified HTTPS request."""
-
- pass
-
-
-class SystemTimeWarning(SecurityWarning):
- """Warned when system time is suspected to be wrong"""
-
- pass
-
-
-class InsecurePlatformWarning(SecurityWarning):
- """Warned when certain TLS/SSL configuration is not available on a platform."""
-
- pass
-
-
-class SNIMissingWarning(HTTPWarning):
- """Warned when making a HTTPS request without SNI available."""
-
- pass
-
-
-class DependencyWarning(HTTPWarning):
- """
- Warned when an attempt is made to import a module with missing optional
- dependencies.
- """
-
- pass
-
-
-class ResponseNotChunked(ProtocolError, ValueError):
- """Response needs to be chunked in order to read it as chunks."""
-
- pass
-
-
-class BodyNotHttplibCompatible(HTTPError):
- """
- Body should be :class:`http.client.HTTPResponse` like
- (have an fp attribute which returns raw chunks) for read_chunked().
- """
-
- pass
-
-
-class IncompleteRead(HTTPError, httplib_IncompleteRead):
- """
- Response length doesn't match expected Content-Length
-
- Subclass of :class:`http.client.IncompleteRead` to allow int value
- for ``partial`` to avoid creating large objects on streamed reads.
- """
-
- def __init__(self, partial, expected):
- super(IncompleteRead, self).__init__(partial, expected)
-
- def __repr__(self):
- return "IncompleteRead(%i bytes read, %i more expected)" % (
- self.partial,
- self.expected,
- )
-
-
-class InvalidChunkLength(HTTPError, httplib_IncompleteRead):
- """Invalid chunk length in a chunked response."""
-
- def __init__(self, response, length):
- super(InvalidChunkLength, self).__init__(
- response.tell(), response.length_remaining
- )
- self.response = response
- self.length = length
-
- def __repr__(self):
- return "InvalidChunkLength(got length %r, %i bytes read)" % (
- self.length,
- self.partial,
- )
-
-
-class InvalidHeader(HTTPError):
- """The header provided was somehow invalid."""
-
- pass
-
-
-class ProxySchemeUnknown(AssertionError, URLSchemeUnknown):
- """ProxyManager does not support the supplied scheme"""
-
- # TODO(t-8ch): Stop inheriting from AssertionError in v2.0.
-
- def __init__(self, scheme):
- # 'localhost' is here because our URL parser parses
- # localhost:8080 -> scheme=localhost, remove if we fix this.
- if scheme == "localhost":
- scheme = None
- if scheme is None:
- message = "Proxy URL had no scheme, should start with http:// or https://"
- else:
- message = (
- "Proxy URL had unsupported scheme %s, should use http:// or https://"
- % scheme
- )
- super(ProxySchemeUnknown, self).__init__(message)
-
-
-class ProxySchemeUnsupported(ValueError):
- """Fetching HTTPS resources through HTTPS proxies is unsupported"""
-
- pass
-
-
-class HeaderParsingError(HTTPError):
- """Raised by assert_header_parsing, but we convert it to a log.warning statement."""
-
- def __init__(self, defects, unparsed_data):
- message = "%s, unparsed data: %r" % (defects or "Unknown", unparsed_data)
- super(HeaderParsingError, self).__init__(message)
-
-
-class UnrewindableBodyError(HTTPError):
- """urllib3 encountered an error when trying to rewind a body"""
-
- pass
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/version.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/version.py
deleted file mode 100644
index 9f6eb98e8f0ef41d6fab05af6e9c24c5ef8a04b8..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/platformdirs/version.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# file generated by setuptools_scm
-# don't change, don't track in version control
-__version__ = version = '2.6.2'
-__version_tuple__ = version_tuple = (2, 6, 2)
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/msvccompiler.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/msvccompiler.py
deleted file mode 100644
index c3823e257ef1de3a79d7f297f38f3bf0bf06f24b..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/msvccompiler.py
+++ /dev/null
@@ -1,692 +0,0 @@
-"""distutils.msvccompiler
-
-Contains MSVCCompiler, an implementation of the abstract CCompiler class
-for the Microsoft Visual Studio.
-"""
-
-# Written by Perry Stoll
-# hacked by Robin Becker and Thomas Heller to do a better job of
-# finding DevStudio (through the registry)
-
-import sys
-import os
-import warnings
-from .errors import (
- DistutilsExecError,
- DistutilsPlatformError,
- CompileError,
- LibError,
- LinkError,
-)
-from .ccompiler import CCompiler, gen_lib_options
-from ._log import log
-
-_can_read_reg = False
-try:
- import winreg
-
- _can_read_reg = True
- hkey_mod = winreg
-
- RegOpenKeyEx = winreg.OpenKeyEx
- RegEnumKey = winreg.EnumKey
- RegEnumValue = winreg.EnumValue
- RegError = winreg.error
-
-except ImportError:
- try:
- import win32api
- import win32con
-
- _can_read_reg = True
- hkey_mod = win32con
-
- RegOpenKeyEx = win32api.RegOpenKeyEx
- RegEnumKey = win32api.RegEnumKey
- RegEnumValue = win32api.RegEnumValue
- RegError = win32api.error
- except ImportError:
- log.info(
- "Warning: Can't read registry to find the "
- "necessary compiler setting\n"
- "Make sure that Python modules winreg, "
- "win32api or win32con are installed."
- )
- pass
-
-if _can_read_reg:
- HKEYS = (
- hkey_mod.HKEY_USERS,
- hkey_mod.HKEY_CURRENT_USER,
- hkey_mod.HKEY_LOCAL_MACHINE,
- hkey_mod.HKEY_CLASSES_ROOT,
- )
-
-
-warnings.warn(
- "msvccompiler is deprecated and slated to be removed "
- "in the future. Please discontinue use or file an issue "
- "with pypa/distutils describing your use case.",
- DeprecationWarning,
-)
-
-
-def read_keys(base, key):
- """Return list of registry keys."""
- try:
- handle = RegOpenKeyEx(base, key)
- except RegError:
- return None
- L = []
- i = 0
- while True:
- try:
- k = RegEnumKey(handle, i)
- except RegError:
- break
- L.append(k)
- i += 1
- return L
-
-
-def read_values(base, key):
- """Return dict of registry keys and values.
-
- All names are converted to lowercase.
- """
- try:
- handle = RegOpenKeyEx(base, key)
- except RegError:
- return None
- d = {}
- i = 0
- while True:
- try:
- name, value, type = RegEnumValue(handle, i)
- except RegError:
- break
- name = name.lower()
- d[convert_mbcs(name)] = convert_mbcs(value)
- i += 1
- return d
-
-
-def convert_mbcs(s):
- dec = getattr(s, "decode", None)
- if dec is not None:
- try:
- s = dec("mbcs")
- except UnicodeError:
- pass
- return s
-
-
-class MacroExpander:
- def __init__(self, version):
- self.macros = {}
- self.load_macros(version)
-
- def set_macro(self, macro, path, key):
- for base in HKEYS:
- d = read_values(base, path)
- if d:
- self.macros["$(%s)" % macro] = d[key]
- break
-
- def load_macros(self, version):
- vsbase = r"Software\Microsoft\VisualStudio\%0.1f" % version
- self.set_macro("VCInstallDir", vsbase + r"\Setup\VC", "productdir")
- self.set_macro("VSInstallDir", vsbase + r"\Setup\VS", "productdir")
- net = r"Software\Microsoft\.NETFramework"
- self.set_macro("FrameworkDir", net, "installroot")
- try:
- if version > 7.0:
- self.set_macro("FrameworkSDKDir", net, "sdkinstallrootv1.1")
- else:
- self.set_macro("FrameworkSDKDir", net, "sdkinstallroot")
- except KeyError:
- raise DistutilsPlatformError(
- """Python was built with Visual Studio 2003;
-extensions must be built with a compiler that can generate compatible binaries.
-Visual Studio 2003 was not found on this system. If you have Cygwin installed,
-you can try compiling with MingW32, by passing "-c mingw32" to setup.py."""
- )
-
- p = r"Software\Microsoft\NET Framework Setup\Product"
- for base in HKEYS:
- try:
- h = RegOpenKeyEx(base, p)
- except RegError:
- continue
- key = RegEnumKey(h, 0)
- d = read_values(base, r"{}\{}".format(p, key))
- self.macros["$(FrameworkVersion)"] = d["version"]
-
- def sub(self, s):
- for k, v in self.macros.items():
- s = s.replace(k, v)
- return s
-
-
-def get_build_version():
- """Return the version of MSVC that was used to build Python.
-
- For Python 2.3 and up, the version number is included in
- sys.version. For earlier versions, assume the compiler is MSVC 6.
- """
- prefix = "MSC v."
- i = sys.version.find(prefix)
- if i == -1:
- return 6
- i = i + len(prefix)
- s, rest = sys.version[i:].split(" ", 1)
- majorVersion = int(s[:-2]) - 6
- if majorVersion >= 13:
- # v13 was skipped and should be v14
- majorVersion += 1
- minorVersion = int(s[2:3]) / 10.0
- # I don't think paths are affected by minor version in version 6
- if majorVersion == 6:
- minorVersion = 0
- if majorVersion >= 6:
- return majorVersion + minorVersion
- # else we don't know what version of the compiler this is
- return None
-
-
-def get_build_architecture():
- """Return the processor architecture.
-
- Possible results are "Intel" or "AMD64".
- """
-
- prefix = " bit ("
- i = sys.version.find(prefix)
- if i == -1:
- return "Intel"
- j = sys.version.find(")", i)
- return sys.version[i + len(prefix) : j]
-
-
-def normalize_and_reduce_paths(paths):
- """Return a list of normalized paths with duplicates removed.
-
- The current order of paths is maintained.
- """
- # Paths are normalized so things like: /a and /a/ aren't both preserved.
- reduced_paths = []
- for p in paths:
- np = os.path.normpath(p)
- # XXX(nnorwitz): O(n**2), if reduced_paths gets long perhaps use a set.
- if np not in reduced_paths:
- reduced_paths.append(np)
- return reduced_paths
-
-
-class MSVCCompiler(CCompiler):
- """Concrete class that implements an interface to Microsoft Visual C++,
- as defined by the CCompiler abstract class."""
-
- compiler_type = 'msvc'
-
- # Just set this so CCompiler's constructor doesn't barf. We currently
- # don't use the 'set_executables()' bureaucracy provided by CCompiler,
- # as it really isn't necessary for this sort of single-compiler class.
- # Would be nice to have a consistent interface with UnixCCompiler,
- # though, so it's worth thinking about.
- executables = {}
-
- # Private class data (need to distinguish C from C++ source for compiler)
- _c_extensions = ['.c']
- _cpp_extensions = ['.cc', '.cpp', '.cxx']
- _rc_extensions = ['.rc']
- _mc_extensions = ['.mc']
-
- # Needed for the filename generation methods provided by the
- # base class, CCompiler.
- src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions
- res_extension = '.res'
- obj_extension = '.obj'
- static_lib_extension = '.lib'
- shared_lib_extension = '.dll'
- static_lib_format = shared_lib_format = '%s%s'
- exe_extension = '.exe'
-
- def __init__(self, verbose=0, dry_run=0, force=0):
- super().__init__(verbose, dry_run, force)
- self.__version = get_build_version()
- self.__arch = get_build_architecture()
- if self.__arch == "Intel":
- # x86
- if self.__version >= 7:
- self.__root = r"Software\Microsoft\VisualStudio"
- self.__macros = MacroExpander(self.__version)
- else:
- self.__root = r"Software\Microsoft\Devstudio"
- self.__product = "Visual Studio version %s" % self.__version
- else:
- # Win64. Assume this was built with the platform SDK
- self.__product = "Microsoft SDK compiler %s" % (self.__version + 6)
-
- self.initialized = False
-
- def initialize(self):
- self.__paths = []
- if (
- "DISTUTILS_USE_SDK" in os.environ
- and "MSSdk" in os.environ
- and self.find_exe("cl.exe")
- ):
- # Assume that the SDK set up everything alright; don't try to be
- # smarter
- self.cc = "cl.exe"
- self.linker = "link.exe"
- self.lib = "lib.exe"
- self.rc = "rc.exe"
- self.mc = "mc.exe"
- else:
- self.__paths = self.get_msvc_paths("path")
-
- if len(self.__paths) == 0:
- raise DistutilsPlatformError(
- "Python was built with %s, "
- "and extensions need to be built with the same "
- "version of the compiler, but it isn't installed." % self.__product
- )
-
- self.cc = self.find_exe("cl.exe")
- self.linker = self.find_exe("link.exe")
- self.lib = self.find_exe("lib.exe")
- self.rc = self.find_exe("rc.exe") # resource compiler
- self.mc = self.find_exe("mc.exe") # message compiler
- self.set_path_env_var('lib')
- self.set_path_env_var('include')
-
- # extend the MSVC path with the current path
- try:
- for p in os.environ['path'].split(';'):
- self.__paths.append(p)
- except KeyError:
- pass
- self.__paths = normalize_and_reduce_paths(self.__paths)
- os.environ['path'] = ";".join(self.__paths)
-
- self.preprocess_options = None
- if self.__arch == "Intel":
- self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/GX', '/DNDEBUG']
- self.compile_options_debug = [
- '/nologo',
- '/Od',
- '/MDd',
- '/W3',
- '/GX',
- '/Z7',
- '/D_DEBUG',
- ]
- else:
- # Win64
- self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/GS-', '/DNDEBUG']
- self.compile_options_debug = [
- '/nologo',
- '/Od',
- '/MDd',
- '/W3',
- '/GS-',
- '/Z7',
- '/D_DEBUG',
- ]
-
- self.ldflags_shared = ['/DLL', '/nologo', '/INCREMENTAL:NO']
- if self.__version >= 7:
- self.ldflags_shared_debug = ['/DLL', '/nologo', '/INCREMENTAL:no', '/DEBUG']
- else:
- self.ldflags_shared_debug = [
- '/DLL',
- '/nologo',
- '/INCREMENTAL:no',
- '/pdb:None',
- '/DEBUG',
- ]
- self.ldflags_static = ['/nologo']
-
- self.initialized = True
-
- # -- Worker methods ------------------------------------------------
-
- def object_filenames(self, source_filenames, strip_dir=0, output_dir=''):
- # Copied from ccompiler.py, extended to return .res as 'object'-file
- # for .rc input file
- if output_dir is None:
- output_dir = ''
- obj_names = []
- for src_name in source_filenames:
- (base, ext) = os.path.splitext(src_name)
- base = os.path.splitdrive(base)[1] # Chop off the drive
- base = base[os.path.isabs(base) :] # If abs, chop off leading /
- if ext not in self.src_extensions:
- # Better to raise an exception instead of silently continuing
- # and later complain about sources and targets having
- # different lengths
- raise CompileError("Don't know how to compile %s" % src_name)
- if strip_dir:
- base = os.path.basename(base)
- if ext in self._rc_extensions:
- obj_names.append(os.path.join(output_dir, base + self.res_extension))
- elif ext in self._mc_extensions:
- obj_names.append(os.path.join(output_dir, base + self.res_extension))
- else:
- obj_names.append(os.path.join(output_dir, base + self.obj_extension))
- return obj_names
-
- def compile( # noqa: C901
- self,
- sources,
- output_dir=None,
- macros=None,
- include_dirs=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- depends=None,
- ):
- if not self.initialized:
- self.initialize()
- compile_info = self._setup_compile(
- output_dir, macros, include_dirs, sources, depends, extra_postargs
- )
- macros, objects, extra_postargs, pp_opts, build = compile_info
-
- compile_opts = extra_preargs or []
- compile_opts.append('/c')
- if debug:
- compile_opts.extend(self.compile_options_debug)
- else:
- compile_opts.extend(self.compile_options)
-
- for obj in objects:
- try:
- src, ext = build[obj]
- except KeyError:
- continue
- if debug:
- # pass the full pathname to MSVC in debug mode,
- # this allows the debugger to find the source file
- # without asking the user to browse for it
- src = os.path.abspath(src)
-
- if ext in self._c_extensions:
- input_opt = "/Tc" + src
- elif ext in self._cpp_extensions:
- input_opt = "/Tp" + src
- elif ext in self._rc_extensions:
- # compile .RC to .RES file
- input_opt = src
- output_opt = "/fo" + obj
- try:
- self.spawn([self.rc] + pp_opts + [output_opt] + [input_opt])
- except DistutilsExecError as msg:
- raise CompileError(msg)
- continue
- elif ext in self._mc_extensions:
- # Compile .MC to .RC file to .RES file.
- # * '-h dir' specifies the directory for the
- # generated include file
- # * '-r dir' specifies the target directory of the
- # generated RC file and the binary message resource
- # it includes
- #
- # For now (since there are no options to change this),
- # we use the source-directory for the include file and
- # the build directory for the RC file and message
- # resources. This works at least for win32all.
- h_dir = os.path.dirname(src)
- rc_dir = os.path.dirname(obj)
- try:
- # first compile .MC to .RC and .H file
- self.spawn([self.mc] + ['-h', h_dir, '-r', rc_dir] + [src])
- base, _ = os.path.splitext(os.path.basename(src))
- rc_file = os.path.join(rc_dir, base + '.rc')
- # then compile .RC to .RES file
- self.spawn([self.rc] + ["/fo" + obj] + [rc_file])
-
- except DistutilsExecError as msg:
- raise CompileError(msg)
- continue
- else:
- # how to handle this file?
- raise CompileError(
- "Don't know how to compile {} to {}".format(src, obj)
- )
-
- output_opt = "/Fo" + obj
- try:
- self.spawn(
- [self.cc]
- + compile_opts
- + pp_opts
- + [input_opt, output_opt]
- + extra_postargs
- )
- except DistutilsExecError as msg:
- raise CompileError(msg)
-
- return objects
-
- def create_static_lib(
- self, objects, output_libname, output_dir=None, debug=0, target_lang=None
- ):
- if not self.initialized:
- self.initialize()
- (objects, output_dir) = self._fix_object_args(objects, output_dir)
- output_filename = self.library_filename(output_libname, output_dir=output_dir)
-
- if self._need_link(objects, output_filename):
- lib_args = objects + ['/OUT:' + output_filename]
- if debug:
- pass # XXX what goes here?
- try:
- self.spawn([self.lib] + lib_args)
- except DistutilsExecError as msg:
- raise LibError(msg)
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- def link( # noqa: C901
- self,
- target_desc,
- objects,
- output_filename,
- output_dir=None,
- libraries=None,
- library_dirs=None,
- runtime_library_dirs=None,
- export_symbols=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- build_temp=None,
- target_lang=None,
- ):
- if not self.initialized:
- self.initialize()
- (objects, output_dir) = self._fix_object_args(objects, output_dir)
- fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs)
- (libraries, library_dirs, runtime_library_dirs) = fixed_args
-
- if runtime_library_dirs:
- self.warn(
- "I don't know what to do with 'runtime_library_dirs': "
- + str(runtime_library_dirs)
- )
-
- lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries)
- if output_dir is not None:
- output_filename = os.path.join(output_dir, output_filename)
-
- if self._need_link(objects, output_filename):
- if target_desc == CCompiler.EXECUTABLE:
- if debug:
- ldflags = self.ldflags_shared_debug[1:]
- else:
- ldflags = self.ldflags_shared[1:]
- else:
- if debug:
- ldflags = self.ldflags_shared_debug
- else:
- ldflags = self.ldflags_shared
-
- export_opts = []
- for sym in export_symbols or []:
- export_opts.append("/EXPORT:" + sym)
-
- ld_args = (
- ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename]
- )
-
- # The MSVC linker generates .lib and .exp files, which cannot be
- # suppressed by any linker switches. The .lib files may even be
- # needed! Make sure they are generated in the temporary build
- # directory. Since they have different names for debug and release
- # builds, they can go into the same directory.
- if export_symbols is not None:
- (dll_name, dll_ext) = os.path.splitext(
- os.path.basename(output_filename)
- )
- implib_file = os.path.join(
- os.path.dirname(objects[0]), self.library_filename(dll_name)
- )
- ld_args.append('/IMPLIB:' + implib_file)
-
- if extra_preargs:
- ld_args[:0] = extra_preargs
- if extra_postargs:
- ld_args.extend(extra_postargs)
-
- self.mkpath(os.path.dirname(output_filename))
- try:
- self.spawn([self.linker] + ld_args)
- except DistutilsExecError as msg:
- raise LinkError(msg)
-
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- # -- Miscellaneous methods -----------------------------------------
- # These are all used by the 'gen_lib_options()' function, in
- # ccompiler.py.
-
- def library_dir_option(self, dir):
- return "/LIBPATH:" + dir
-
- def runtime_library_dir_option(self, dir):
- raise DistutilsPlatformError(
- "don't know how to set runtime library search path for MSVC++"
- )
-
- def library_option(self, lib):
- return self.library_filename(lib)
-
- def find_library_file(self, dirs, lib, debug=0):
- # Prefer a debugging library if found (and requested), but deal
- # with it if we don't have one.
- if debug:
- try_names = [lib + "_d", lib]
- else:
- try_names = [lib]
- for dir in dirs:
- for name in try_names:
- libfile = os.path.join(dir, self.library_filename(name))
- if os.path.exists(libfile):
- return libfile
- else:
- # Oops, didn't find it in *any* of 'dirs'
- return None
-
- # Helper methods for using the MSVC registry settings
-
- def find_exe(self, exe):
- """Return path to an MSVC executable program.
-
- Tries to find the program in several places: first, one of the
- MSVC program search paths from the registry; next, the directories
- in the PATH environment variable. If any of those work, return an
- absolute path that is known to exist. If none of them work, just
- return the original program name, 'exe'.
- """
- for p in self.__paths:
- fn = os.path.join(os.path.abspath(p), exe)
- if os.path.isfile(fn):
- return fn
-
- # didn't find it; try existing path
- for p in os.environ['Path'].split(';'):
- fn = os.path.join(os.path.abspath(p), exe)
- if os.path.isfile(fn):
- return fn
-
- return exe
-
- def get_msvc_paths(self, path, platform='x86'):
- """Get a list of devstudio directories (include, lib or path).
-
- Return a list of strings. The list will be empty if unable to
- access the registry or appropriate registry keys not found.
- """
- if not _can_read_reg:
- return []
-
- path = path + " dirs"
- if self.__version >= 7:
- key = r"{}\{:0.1f}\VC\VC_OBJECTS_PLATFORM_INFO\Win32\Directories".format(
- self.__root,
- self.__version,
- )
- else:
- key = (
- r"%s\6.0\Build System\Components\Platforms"
- r"\Win32 (%s)\Directories" % (self.__root, platform)
- )
-
- for base in HKEYS:
- d = read_values(base, key)
- if d:
- if self.__version >= 7:
- return self.__macros.sub(d[path]).split(";")
- else:
- return d[path].split(";")
- # MSVC 6 seems to create the registry entries we need only when
- # the GUI is run.
- if self.__version == 6:
- for base in HKEYS:
- if read_values(base, r"%s\6.0" % self.__root) is not None:
- self.warn(
- "It seems you have Visual Studio 6 installed, "
- "but the expected registry settings are not present.\n"
- "You must at least run the Visual Studio GUI once "
- "so that these entries are created."
- )
- break
- return []
-
- def set_path_env_var(self, name):
- """Set environment variable 'name' to an MSVC path type value.
-
- This is equivalent to a SET command prior to execution of spawned
- commands.
- """
-
- if name == "lib":
- p = self.get_msvc_paths("library")
- else:
- p = self.get_msvc_paths(name)
- if p:
- os.environ[name] = ';'.join(p)
-
-
-if get_build_version() >= 8.0:
- log.debug("Importing new compiler from distutils.msvc9compiler")
- OldMSVCCompiler = MSVCCompiler
- from distutils.msvc9compiler import MSVCCompiler
-
- # get_build_architecture not really relevant now we support cross-compile
- from distutils.msvc9compiler import MacroExpander # noqa: F811
diff --git a/spaces/Tej3/ECG_Classification/models/CNN.py b/spaces/Tej3/ECG_Classification/models/CNN.py
deleted file mode 100644
index 6a4a8f22eabf0b3dc1d2255974d5384bea427204..0000000000000000000000000000000000000000
--- a/spaces/Tej3/ECG_Classification/models/CNN.py
+++ /dev/null
@@ -1,213 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchinfo import summary
-
-# Not in use yet
-class Conv1d_layer(nn.Module):
- def __init__(self, in_channel, out_channel, kernel_size) -> None:
- super().__init__()
- self.conv = nn.Conv1d(in_channels=in_channel, out_channels=out_channel, kernel_size=kernel_size)
- self.batch_norm = torch.nn.BatchNorm1d(out_channel)
- self.dropout = nn.Dropout1d(p=0.5)
-
- def forward(self, x):
- x= self.conv(x)
- x = self.batch_norm(x)
- x = self.dropout(x)
- return x
-
-class CNN(nn.Module):
- def __init__(self, ecg_channels=12):
- super(CNN, self).__init__()
- self.name = "CNN"
- self.conv1 = nn.Conv1d(ecg_channels, 16, 7)
- self.pool1 = nn.MaxPool1d(2, 2)
- self.conv2 = nn.Conv1d(16, 32, 5)
- self.pool2 = nn.MaxPool1d(2, 2)
- self.conv3 = nn.Conv1d(32, 48, 3)
- self.pool3 = nn.MaxPool1d(2, 2)
- self.fc0 = nn.Linear(5856, 512)
- self.fc1 = nn.Linear(512, 128)
- self.fc2 = nn.Linear(128, 5)
- self.activation = nn.ReLU()
- def forward(self, x, notes=None):
- x = self.pool1(self.activation(self.conv1(x)))
- x = self.pool2(self.activation(self.conv2(x)))
- x = self.pool3(self.activation(self.conv3(x)))
- x = x.view(x.size(0),-1)
- x = self.activation(self.fc0(x))
- x = self.activation(self.fc1(x))
- x = self.fc2(x)
- x = x.squeeze(1)
- return x
-
-
-class MMCNN_SUM(nn.Module):
- def __init__(self, ecg_channels=12):
- super(MMCNN_SUM, self).__init__()
- # ECG processing Layers
- self.name = "MMCNN_SUM"
- self.conv1 = Conv1d_layer(ecg_channels, 16, 7)
- self.pool1 = nn.MaxPool1d(2, 2)
- self.conv2 = Conv1d_layer(16, 32, 5)
- self.pool2 = nn.MaxPool1d(2, 2)
- self.conv3 = Conv1d_layer(32, 48, 3)
- self.pool3 = nn.MaxPool1d(2, 2)
- self.fc0 = nn.Linear(5856, 512)
- self.fc1 = nn.Linear(512, 128)
- self.fc2 = nn.Linear(128, 5)
-
- # Clinical Notes Processing Layers
- self.fc_emb = nn.Linear(768, 128)
- self.norm = nn.LayerNorm(128)
-
- self.activation = nn.ReLU()
-
- def forward(self, x, notes):
- # ECG Processing
- x = self.pool1(self.activation(self.conv1(x)))
- x = self.pool2(self.activation(self.conv2(x)))
- x = self.pool3(self.activation(self.conv3(x)))
- x = x.view(x.size(0),-1)
- x = self.activation(self.fc0(x))
- x = self.activation(self.fc1(x))
-
- # Notes Processing
- notes = notes.view(notes.size(0),-1)
- notes = self.activation(self.fc_emb(notes))
-
- x = self.fc2(self.norm(x + notes))
- x = x.squeeze(1)
- return x
-
-class MMCNN_CAT(nn.Module):
- def __init__(self, ecg_channels=12):
- super(MMCNN_CAT, self).__init__()
- # ECG processing Layers
- self.name = "MMCNN_CAT"
- self.conv1 = nn.Conv1d(ecg_channels, 16, 7)
- self.pool1 = nn.MaxPool1d(2, 2)
- self.conv2 = nn.Conv1d(16, 32, 5)
- self.pool2 = nn.MaxPool1d(2, 2)
- self.conv3 = nn.Conv1d(32, 48, 3)
- self.pool3 = nn.MaxPool1d(2, 2)
- self.fc0 = nn.Linear(5856, 512)
- self.fc1 = nn.Linear(512, 128)
- self.fc2 = nn.Linear(256, 5)
-
- # Clinical Notes Processing Layers
- self.fc_emb = nn.Linear(768, 128)
- self.norm = nn.LayerNorm(128)
-
- self.activation = nn.ReLU()
-
- def forward(self, x, notes):
- # ECG Processing
- x = self.pool1(self.activation(self.conv1(x)))
- x = self.pool2(self.activation(self.conv2(x)))
- x = self.pool3(self.activation(self.conv3(x)))
- x = x.view(x.size(0),-1)
- x = self.activation(self.fc0(x))
- x = self.activation(self.fc1(x))
-
- # Notes Processing
- notes = notes.view(notes.size(0),-1)
- notes = self.activation(self.fc_emb(notes))
-
- x = self.fc2(torch.cat((x,notes),dim=1))
- x = x.squeeze(1)
- return x
-class MMCNN_ATT(nn.Module):
- def __init__(self, ecg_channels=12):
- super(MMCNN_ATT, self).__init__()
- # ECG processing Layers
- self.name = "MMCNN_ATT"
- self.conv1 = nn.Conv1d(ecg_channels, 16, 7)
- self.pool1 = nn.MaxPool1d(2, 2)
- self.conv2 = nn.Conv1d(16, 32, 5)
- self.pool2 = nn.MaxPool1d(2, 2)
- self.conv3 = nn.Conv1d(32, 48, 3)
- self.pool3 = nn.MaxPool1d(2, 2)
- self.fc0 = nn.Linear(5856, 512)
- self.fc1 = nn.Linear(512, 128)
- self.fc2 = nn.Linear(128, 5)
-
- # Clinical Notes Processing Layers
- self.fc_emb = nn.Linear(768, 128)
- self.norm1 = nn.LayerNorm(128)
- self.norm2 = nn.LayerNorm(128)
-
- self.attention = nn.MultiheadAttention(128, 8, batch_first=True)
- self.activation = nn.ReLU()
-
- def forward(self, x, notes):
- # ECG Processing
- x = self.pool1(self.activation(self.conv1(x)))
- x = self.pool2(self.activation(self.conv2(x)))
- x = self.pool3(self.activation(self.conv3(x)))
- x = x.view(x.size(0),-1)
- x = self.activation(self.fc0(x))
- x = self.activation(self.fc1(x))
- x = self.norm1(x)
-
- # Notes Processing
- notes = notes.view(notes.size(0),-1)
- notes = self.activation(self.fc_emb(notes))
- notes = self.norm2(notes)
- notes=notes.unsqueeze(1)
- x=x.unsqueeze(1)
- x,_= self.attention(notes, x, x)
- x = self.fc2(x)
- x = x.squeeze(1)
- return x
-
-class MMCNN_SUM_ATT(nn.Module):
- def __init__(self, ecg_channels=12):
- super(MMCNN_SUM_ATT, self).__init__()
- # ECG processing Layers
- self.name = "MMCNN_SUM_ATT"
- self.conv1 = nn.Conv1d(ecg_channels, 16, 7)
- self.pool1 = nn.MaxPool1d(2, 2)
- self.conv2 = nn.Conv1d(16, 32, 5)
- self.pool2 = nn.MaxPool1d(2, 2)
- self.conv3 = nn.Conv1d(32, 48, 3)
- self.pool3 = nn.MaxPool1d(2, 2)
- self.fc0 = nn.Linear(5856, 512)
- self.fc1 = nn.Linear(512, 128)
- self.fc2 = nn.Linear(128, 5)
-
- # Clinical Notes Processing Layers
- self.fc_emb = nn.Linear(768, 128)
- self.norm = nn.LayerNorm(128)
-
- self.attention = nn.MultiheadAttention(128, 8, batch_first=True)
- self.activation = nn.ReLU()
-
- def forward(self, x, notes):
- # ECG Processing
- x = self.pool1(self.activation(self.conv1(x)))
- x = self.pool2(self.activation(self.conv2(x)))
- x = self.pool3(self.activation(self.conv3(x)))
- x = x.view(x.size(0),-1)
- x = self.activation(self.fc0(x))
- x = self.activation(self.fc1(x))
-
- # Notes Processing
- notes = notes.view(notes.size(0),-1)
- notes = self.activation(self.fc_emb(notes))
- x = self.norm(x + notes)
-
- x=x.unsqueeze(1)
- # print(x.shape)
- x,_= self.attention(x, x, x)
-
- x = self.fc2(x)
- x = x.squeeze(1)
- return x
-
-if __name__ == "__main__":
- model = CNN()
- # model = Conv1d_layer(12, 16, 7)
- summary(model, input_size = (1, 12, 1000))
-
\ No newline at end of file
diff --git a/spaces/TencentARC/Caption-Anything/caption_anything/captioner/blip2.py b/spaces/TencentARC/Caption-Anything/caption_anything/captioner/blip2.py
deleted file mode 100644
index b3b27b5e71ea6c6bae7ef0df09cf3e1f035fb869..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/Caption-Anything/caption_anything/captioner/blip2.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import torch
-from PIL import Image
-import numpy as np
-from typing import Union
-from transformers import AutoProcessor, Blip2ForConditionalGeneration
-
-from caption_anything.utils.utils import is_platform_win, load_image
-from .base_captioner import BaseCaptioner
-import time
-
-class BLIP2Captioner(BaseCaptioner):
- def __init__(self, device, dialogue: bool = False, enable_filter: bool = False):
- super().__init__(device, enable_filter)
- self.device = device
- self.dialogue = dialogue
- self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
- self.processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
- if is_platform_win():
- self.model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="sequential", torch_dtype=self.torch_dtype)
- else:
- self.model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map='sequential', load_in_8bit=True)
-
- @torch.no_grad()
- def inference(self,
- image: Union[np.ndarray, Image.Image, str],
- filter=False,
- args={}):
- args['return_ppl'] = args.get('return_ppl', False)
- args['text_prompt'] = args.get('text_prompt', 'Question: what does the image show? Answer:')
- args['reference_caption'] = args.get('reference_caption', [])
-
- image = load_image(image, return_type="pil")
- result = {}
- if not self.dialogue:
- inputs = self.processor(image, text = args['text_prompt'], return_tensors="pt").to(self.device, self.torch_dtype)
- out = self.model.generate(**inputs, return_dict_in_generate=True, output_scores=True, max_new_tokens=50)
- caption = self.processor.decode(out.sequences[0], skip_special_tokens=True).strip()
- if self.enable_filter and filter:
- print('reference caption: {}, caption: {}'.format(args['reference_caption'], caption))
- clip_score = self.filter_caption(image, caption, args['reference_caption'])
- result['clip_score'] = clip_score
- if args['return_ppl']:
- ppl_score = torch.stack(out.scores, dim=1).softmax(dim=2).log().max(dim=2)[0].sum(dim=1)[0]
- result['ppl_score'] = ppl_score.item()
- print(f"\nProcessed ImageCaptioning by BLIP2Captioner, Output Text: {caption}")
- result['caption'] = caption
- return result
- else:
- context = []
- template = "Question: {} Answer: {}."
- while(True):
- input_texts = input()
- if input_texts == 'end':
- break
- prompt = " ".join([template.format(context[i][0], context[i][1]) for i in range(len(context))]) + " Question: " + input_texts + " Answer:"
- inputs = self.processor(image, text = prompt, return_tensors="pt").to(self.device, self.torch_dtype)
- out = self.model.generate(**inputs, max_new_tokens=50)
- captions = self.processor.decode(out[0], skip_special_tokens=True).strip()
- context.append((input_texts, captions))
- result['caption'] = captions
- return result
-
-if __name__ == '__main__':
-
- dialogue = False
- model = BLIP2Captioner(device='cuda:4', dialogue=dialogue)
- image_path = 'test_images/img2.jpg'
- seg_mask = np.zeros((224,224))
- seg_mask[50:200, 50:200] = 1
- print(f'process image {image_path}')
- print(model.inference_seg(image_path, seg_mask))
diff --git a/spaces/Tetel/secondbing/claude.py b/spaces/Tetel/secondbing/claude.py
deleted file mode 100644
index 92b22bad3736820c002b2c86e89d7e77b6bdc21d..0000000000000000000000000000000000000000
--- a/spaces/Tetel/secondbing/claude.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import asyncio
-import json
-import os
-
-from slack_sdk.web.async_client import AsyncWebClient
-
-if os.path.exists("claude.json"):
- with open("claude.json") as f:
- try:
- claude_config = json.load(f)
- except json.JSONDecodeError:
- claude_config = {}
-else:
- claude_config = {}
-
-
-class Chatbot:
- def __init__(
- self,
- slack_user_token=claude_config.get("slackUserToken"),
- slack_channel_id=claude_config.get("slackChannelId"),
- claude_member_id=claude_config.get("claudeMemberId"),
- proxy=None,
- ):
- self.client = AsyncWebClient(token=slack_user_token, proxy=proxy)
- self.slack_channel_id = slack_channel_id
- self.claude_member_id = claude_member_id
-
- async def ask_stream(self, message):
- if len(message) < 3000: # Slack truncates message at ~3000 characters
- response = await self.client.chat_postMessage(channel=self.slack_channel_id, text=message)
- thread_ts = response["ts"]
- else:
- response = await self.client.chat_postMessage(channel=self.slack_channel_id, text=message[:3000])
- thread_ts = response["ts"]
- await self.client.chat_postMessage(
- channel=self.slack_channel_id,
- text=message[3000:],
- thread_ts=thread_ts,
- )
-
- await self.client.chat_postMessage(
- channel=self.slack_channel_id,
- text=f'<@{self.claude_member_id}> [assistant](#message)',
- thread_ts=thread_ts,
- )
-
- while True:
- await asyncio.sleep(1)
- replies_response = await self.client.conversations_replies(channel=self.slack_channel_id, ts=thread_ts)
- all_replies = replies_response["messages"]
- for reply in all_replies:
- if reply["user"] == self.claude_member_id:
- break
- else:
- continue
-
- if reply["text"].endswith("_Typing…_"):
- yield reply["text"][:-11]
- else:
- yield reply["text"]
- break
diff --git a/spaces/Tinki/text_generator/README.md b/spaces/Tinki/text_generator/README.md
deleted file mode 100644
index 685ffbcd7e8c521b70cd5dfc36e99acc4c988c1b..0000000000000000000000000000000000000000
--- a/spaces/Tinki/text_generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Generator
-emoji: 🌖
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Uday007/startup-profit-predictor/app.py b/spaces/Uday007/startup-profit-predictor/app.py
deleted file mode 100644
index ffc1c9a2b66b55a7b542ddf544850385f32f62f9..0000000000000000000000000000000000000000
--- a/spaces/Uday007/startup-profit-predictor/app.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import gradio as gr
-import pandas as pd
-import numpy as np
-from joblib import load
-
-def predict_profit(
- RandDSpend,Administration,MarketingSpend,State
-):
- model=load("startup.jb")
-
- # Create dict array from parameters
- data={
- "RandDSpend":[RandDSpend],
- "Administration":[Administration],
- "MarketingSpend":[MarketingSpend],
- "State":[State]
- }
-
- xin=pd.DataFrame(data)
- Profit=model.predict(xin)
- return Profit[0]
-
-ui=gr.Interface(
- fn=predict_profit,
- inputs=[
- gr.inputs.Textbox(placeholder="R&D Amount",numeric=True,label="R&D SPEND"),
- gr.inputs.Textbox(placeholder="Administration Amount",numeric=True,label="ADMINISTRATION"),
- gr.inputs.Textbox(placeholder="Marketing Amount",numeric=True,label="MARKETING SPEND"),
- gr.Dropdown(["New York","California","Florida"],label="STATE"),
- ],
-
- title="STARTUP PROFIT PREDICTOR",
- outputs="text",
- examples=[[165349.2,136897.8,471784.1,"New York"],
- [67532.53,105751.03,304768.73,"Florida"],
- [64664.71,139553.16,137962.62,"California"]]
-
-)
-
-if __name__=="__main__":
- ui.launch()
diff --git a/spaces/VickyKira/NASAGPT/client/js/chat.js b/spaces/VickyKira/NASAGPT/client/js/chat.js
deleted file mode 100644
index 8a4449e0fd94c629867d62f53f5467f8e8292ca7..0000000000000000000000000000000000000000
--- a/spaces/VickyKira/NASAGPT/client/js/chat.js
+++ /dev/null
@@ -1,508 +0,0 @@
-const query = (obj) =>
- Object.keys(obj)
- .map((k) => encodeURIComponent(k) + "=" + encodeURIComponent(obj[k]))
- .join("&");
-const url_prefix = document.querySelector("body").getAttribute("data-urlprefix");
-const markdown = window.markdownit();
-const message_box = document.getElementById(`messages`);
-const message_input = document.getElementById(`message-input`);
-const box_conversations = document.querySelector(`.top`);
-const spinner = box_conversations.querySelector(".spinner");
-const stop_generating = document.querySelector(`.stop-generating`);
-const send_button = document.querySelector(`#send-button`);
-const user_image = ` `;
-const gpt_image = ` `;
-let prompt_lock = false;
-
-hljs.addPlugin(new CopyButtonPlugin());
-
-message_input.addEventListener("blur", () => {
- window.scrollTo(0, 0);
-});
-
-message_input.addEventListener("focus", () => {
- document.documentElement.scrollTop = document.documentElement.scrollHeight;
-});
-
-const delete_conversations = async () => {
- localStorage.clear();
- await new_conversation();
-};
-
-const handle_ask = async () => {
- message_input.style.height = `80px`;
- window.scrollTo(0, 0);
- let message = message_input.value;
-
- if (message.length > 0) {
- message_input.value = ``;
- message_input.dispatchEvent(new Event("input"));
- await ask_gpt(message);
- }
-};
-
-const remove_cancel_button = async () => {
- stop_generating.classList.add(`stop-generating-hiding`);
-
- setTimeout(() => {
- stop_generating.classList.remove(`stop-generating-hiding`);
- stop_generating.classList.add(`stop-generating-hidden`);
- }, 300);
-};
-
-const ask_gpt = async (message) => {
- try {
- message_input.value = ``;
- message_input.innerHTML = ``;
- message_input.innerText = ``;
-
- add_conversation(window.conversation_id, message.substr(0, 16));
- window.scrollTo(0, 0);
- window.controller = new AbortController();
-
- jailbreak = document.getElementById("jailbreak");
- model = document.getElementById("model");
- prompt_lock = true;
- window.text = ``;
- window.token = message_id();
-
- stop_generating.classList.remove(`stop-generating-hidden`);
-
- add_user_message_box(message);
-
- message_box.scrollTop = message_box.scrollHeight;
- window.scrollTo(0, 0);
- await new Promise((r) => setTimeout(r, 500));
- window.scrollTo(0, 0);
-
- message_box.innerHTML += `
-
- `;
-
- message_box.scrollTop = message_box.scrollHeight;
- window.scrollTo(0, 0);
- await new Promise((r) => setTimeout(r, 1000));
- window.scrollTo(0, 0);
-
- const response = await fetch(`${url_prefix}/backend-api/v2/conversation`, {
- method: `POST`,
- signal: window.controller.signal,
- headers: {
- "content-type": `application/json`,
- accept: `text/event-stream`,
- },
- body: JSON.stringify({
- conversation_id: window.conversation_id,
- action: `_ask`,
- model: model.options[model.selectedIndex].value,
- jailbreak: jailbreak.options[jailbreak.selectedIndex].value,
- meta: {
- id: window.token,
- content: {
- conversation: await get_conversation(window.conversation_id),
- internet_access: document.getElementById("switch").checked,
- content_type: "text",
- parts: [
- {
- content: message,
- role: "user",
- },
- ],
- },
- },
- }),
- });
-
- const reader = response.body.getReader();
-
- while (true) {
- const { value, done } = await reader.read();
- if (done) break;
-
- chunk = decodeUnicode(new TextDecoder().decode(value));
-
- if (
- chunk.includes(` {
- const messageDiv = createElement("div", { classNames: ["message"] });
- const avatarContainer = createElement("div", { classNames: ["avatar-container"], innerHTML: user_image });
- const contentDiv = createElement("div", {
- classNames: ["content"],
- id: `user_${token}`,
- textContent: message,
- });
-
- messageDiv.append(avatarContainer, contentDiv);
- message_box.appendChild(messageDiv);
-};
-
-const decodeUnicode = (str) => {
- return str.replace(/\\u([a-fA-F0-9]{4})/g, function (match, grp) {
- return String.fromCharCode(parseInt(grp, 16));
- });
-};
-
-const clear_conversations = async () => {
- const elements = box_conversations.childNodes;
- let index = elements.length;
-
- if (index > 0) {
- while (index--) {
- const element = elements[index];
- if (element.nodeType === Node.ELEMENT_NODE && element.tagName.toLowerCase() !== `button`) {
- box_conversations.removeChild(element);
- }
- }
- }
-};
-
-const clear_conversation = async () => {
- let messages = message_box.getElementsByTagName(`div`);
-
- while (messages.length > 0) {
- message_box.removeChild(messages[0]);
- }
-};
-
-const delete_conversation = async (conversation_id) => {
- localStorage.removeItem(`conversation:${conversation_id}`);
-
- if (window.conversation_id == conversation_id) {
- await new_conversation();
- }
-
- await load_conversations(20, 0, true);
-};
-
-const set_conversation = async (conversation_id) => {
- history.pushState({}, null, `${url_prefix}/chat/${conversation_id}`);
- window.conversation_id = conversation_id;
-
- await clear_conversation();
- await load_conversation(conversation_id);
- await load_conversations(20, 0, true);
-};
-
-const new_conversation = async () => {
- history.pushState({}, null, `${url_prefix}/chat/`);
- window.conversation_id = uuid();
-
- await clear_conversation();
- await load_conversations(20, 0, true);
-};
-
-const load_conversation = async (conversation_id) => {
- let conversation = await JSON.parse(localStorage.getItem(`conversation:${conversation_id}`));
- console.log(conversation, conversation_id);
-
- for (item of conversation.items) {
- if (is_assistant(item.role)) {
- message_box.innerHTML += load_gpt_message_box(item.content);
- } else {
- message_box.innerHTML += load_user_message_box(item.content);
- }
- }
-
- document.querySelectorAll(`code`).forEach((el) => {
- hljs.highlightElement(el);
- });
-
- message_box.scrollTo({ top: message_box.scrollHeight, behavior: "smooth" });
-
- setTimeout(() => {
- message_box.scrollTop = message_box.scrollHeight;
- }, 500);
-};
-
-const load_user_message_box = (content) => {
- const messageDiv = createElement("div", { classNames: ["message"] });
- const avatarContainer = createElement("div", { classNames: ["avatar-container"], innerHTML: user_image });
- const contentDiv = createElement("div", { classNames: ["content"] });
- const preElement = document.createElement("pre");
- preElement.textContent = content;
- contentDiv.appendChild(preElement);
-
- messageDiv.append(avatarContainer, contentDiv);
-
- return messageDiv.outerHTML;
-};
-
-const load_gpt_message_box = (content) => {
- return `
-
-
- ${gpt_image}
-
-
- ${markdown.render(content)}
-
-
- `;
-};
-
-const is_assistant = (role) => {
- return role == "assistant";
-};
-
-const get_conversation = async (conversation_id) => {
- let conversation = await JSON.parse(localStorage.getItem(`conversation:${conversation_id}`));
- return conversation.items;
-};
-
-const add_conversation = async (conversation_id, title) => {
- if (localStorage.getItem(`conversation:${conversation_id}`) == null) {
- localStorage.setItem(
- `conversation:${conversation_id}`,
- JSON.stringify({
- id: conversation_id,
- title: title,
- items: [],
- })
- );
- }
-};
-
-const add_message = async (conversation_id, role, content) => {
- before_adding = JSON.parse(localStorage.getItem(`conversation:${conversation_id}`));
-
- before_adding.items.push({
- role: role,
- content: content,
- });
-
- localStorage.setItem(`conversation:${conversation_id}`, JSON.stringify(before_adding)); // update conversation
-};
-
-const load_conversations = async (limit, offset, loader) => {
- //console.log(loader);
- //if (loader === undefined) box_conversations.appendChild(spinner);
-
- let conversations = [];
- for (let i = 0; i < localStorage.length; i++) {
- if (localStorage.key(i).startsWith("conversation:")) {
- let conversation = localStorage.getItem(localStorage.key(i));
- conversations.push(JSON.parse(conversation));
- }
- }
-
- //if (loader === undefined) spinner.parentNode.removeChild(spinner)
- await clear_conversations();
-
- for (conversation of conversations) {
- box_conversations.innerHTML += `
-
- `;
- }
-
- document.querySelectorAll(`code`).forEach((el) => {
- hljs.highlightElement(el);
- });
-};
-
-document.getElementById(`cancelButton`).addEventListener(`click`, async () => {
- window.controller.abort();
- console.log(`aborted ${window.conversation_id}`);
-});
-
-function h2a(str1) {
- var hex = str1.toString();
- var str = "";
-
- for (var n = 0; n < hex.length; n += 2) {
- str += String.fromCharCode(parseInt(hex.substr(n, 2), 16));
- }
-
- return str;
-}
-
-const uuid = () => {
- return `xxxxxxxx-xxxx-4xxx-yxxx-${Date.now().toString(16)}`.replace(/[xy]/g, function (c) {
- var r = (Math.random() * 16) | 0,
- v = c == "x" ? r : (r & 0x3) | 0x8;
- return v.toString(16);
- });
-};
-
-const message_id = () => {
- random_bytes = (Math.floor(Math.random() * 1338377565) + 2956589730).toString(2);
- unix = Math.floor(Date.now() / 1000).toString(2);
-
- return BigInt(`0b${unix}${random_bytes}`).toString();
-};
-
-window.onload = async () => {
- load_settings_localstorage();
-
- conversations = 0;
- for (let i = 0; i < localStorage.length; i++) {
- if (localStorage.key(i).startsWith("conversation:")) {
- conversations += 1;
- }
- }
-
- if (conversations == 0) localStorage.clear();
-
- await setTimeout(() => {
- load_conversations(20, 0);
- }, 1);
-
- if (!window.location.href.endsWith(`#`)) {
- if (/\/chat\/.+/.test(window.location.href.slice(url_prefix.length))) {
- await load_conversation(window.conversation_id);
- }
- }
-
- message_input.addEventListener("keydown", async (evt) => {
- if (prompt_lock) return;
-
- if (evt.key === "Enter" && !evt.shiftKey) {
- evt.preventDefault();
- await handle_ask();
- }
- });
-
- send_button.addEventListener("click", async (event) => {
- event.preventDefault();
- if (prompt_lock) return;
- message_input.blur();
- await handle_ask();
- });
-
- register_settings_localstorage();
-};
-
-const register_settings_localstorage = async () => {
- settings_ids = ["switch", "model", "jailbreak"];
- settings_elements = settings_ids.map((id) => document.getElementById(id));
- settings_elements.map((element) =>
- element.addEventListener(`change`, async (event) => {
- switch (event.target.type) {
- case "checkbox":
- localStorage.setItem(event.target.id, event.target.checked);
- break;
- case "select-one":
- localStorage.setItem(event.target.id, event.target.selectedIndex);
- break;
- default:
- console.warn("Unresolved element type");
- }
- })
- );
-};
-
-const load_settings_localstorage = async () => {
- settings_ids = ["switch", "model", "jailbreak"];
- settings_elements = settings_ids.map((id) => document.getElementById(id));
- settings_elements.map((element) => {
- if (localStorage.getItem(element.id)) {
- switch (element.type) {
- case "checkbox":
- element.checked = localStorage.getItem(element.id) === "true";
- break;
- case "select-one":
- element.selectedIndex = parseInt(localStorage.getItem(element.id));
- break;
- default:
- console.warn("Unresolved element type");
- }
- }
- });
-};
-
-function clearTextarea(textarea) {
- textarea.style.removeProperty("height");
- textarea.style.height = `${textarea.scrollHeight + 4}px`;
- if (textarea.value.trim() === "" && textarea.value.includes("\n")) {
- textarea.value = "";
- }
-}
-
-function createElement(tag, { classNames, id, innerHTML, textContent } = {}) {
- const el = document.createElement(tag);
- if (classNames) {
- el.classList.add(...classNames);
- }
- if (id) {
- el.id = id;
- }
- if (innerHTML) {
- el.innerHTML = innerHTML;
- }
- if (textContent) {
- const preElement = document.createElement("pre");
- preElement.textContent = textContent;
- el.appendChild(preElement);
- }
- return el;
-}
diff --git a/spaces/Willow123/InternLM-XComposer/demo_asset/download.py b/spaces/Willow123/InternLM-XComposer/demo_asset/download.py
deleted file mode 100644
index 4afec2f7c2836532a6974d102c7d7aa6ce1cfe60..0000000000000000000000000000000000000000
--- a/spaces/Willow123/InternLM-XComposer/demo_asset/download.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import os
-import re
-import json
-import requests
-import urllib.request
-from multiprocessing.pool import ThreadPool
-
-
-def download_image(url, path):
- if url == '':
- print('url is empty')
- return False
-
- try:
- urllib.request.urlopen(url)
- urllib.request.urlretrieve(url, path)
- return True
- except urllib.error.URLError as e:
- if hasattr(e, "code"):
- print(e.code)
- if hasattr(e, "reason"):
- print(e.reason)
- print(f"{url} download failed")
- return False
-
-
-def download_image_thread(url_list, folder, index, num_processes, Async=True):
- pool = ThreadPool(processes=num_processes)
- thread_list = []
- os.makedirs(folder, exist_ok=True)
- for i in range(len(url_list)):
- path = os.path.join(folder, f'temp_{index}_{i}.png')
- if Async:
- out = pool.apply_async(func=download_image, args=(url_list[i], path))
- else:
- out = pool.apply(func=download_image, args=(url_list[i], path))
- thread_list.append(out)
-
- pool.close()
- pool.join()
-
-
diff --git a/spaces/Y-T-G/Blur-Anything/utils/blur.py b/spaces/Y-T-G/Blur-Anything/utils/blur.py
deleted file mode 100644
index e76c83a61e4031a393d0112c46611642cf24600b..0000000000000000000000000000000000000000
--- a/spaces/Y-T-G/Blur-Anything/utils/blur.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import os
-import cv2
-import numpy as np
-
-
-# resize frames
-def resize_frames(frames, size=None):
- """
- size: (w, h)
- """
- if size is not None:
- frames = [cv2.resize(f, size) for f in frames]
- frames = np.stack(frames, 0)
-
- return frames
-
-
-# resize masks
-def resize_masks(masks, size=None):
- """
- size: (w, h)
- """
- if size is not None:
- masks = [np.expand_dims(cv2.resize(m, size), 2) for m in masks]
- masks = np.stack(masks, 0)
-
- return masks
-
-
-# apply Gaussian blur to a frame with the given strength
-def apply_blur(frame, strength):
- blurred = cv2.GaussianBlur(frame, (strength, strength), 0)
- return blurred
-
-
-# blur frames
-def blur_frames_and_write(
- frames, masks, ratio, strength, dilate_radius=15, fps=30, output_path="blurred.mp4"
-):
- assert frames.shape[:3] == masks.shape, "different size between frames and masks"
- assert ratio > 0 and ratio <= 1, "ratio must be in (0, 1]"
-
- # --------------------
- # pre-processing
- # --------------------
- masks = masks.copy()
- masks = np.clip(masks, 0, 1)
- kernel = cv2.getStructuringElement(2, (dilate_radius, dilate_radius))
- masks = np.stack([cv2.dilate(mask, kernel) for mask in masks], 0)
- T, H, W = masks.shape
- masks = np.expand_dims(masks, axis=3) # expand to T, H, W, 1
- # size: (w, h)
- if ratio == 1:
- size = (W, H)
- binary_masks = masks
- else:
- size = [int(W * ratio), int(H * ratio)]
- size = [
- si + 1 if si % 2 > 0 else si for si in size
- ] # only consider even values
- # shortest side should be larger than 50
- if min(size) < 50:
- ratio = 50.0 / min(H, W)
- size = [int(W * ratio), int(H * ratio)]
- binary_masks = resize_masks(masks, tuple(size))
- frames = resize_frames(frames, tuple(size)) # T, H, W, 3
-
- if not os.path.exists(os.path.dirname(output_path)):
- os.makedirs(os.path.dirname(output_path))
- writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
-
- for frame, mask in zip(frames, binary_masks):
- blurred_frame = apply_blur(frame, strength)
- masked = cv2.bitwise_or(blurred_frame, blurred_frame, mask=mask)
- processed = np.where(masked == (0, 0, 0), frame, masked)
-
- writer.write(processed[:, :, ::-1])
-
- writer.release()
-
- return output_path
diff --git a/spaces/YUANAI/DiffspeechResearch/inference/tts/ps_flow.py b/spaces/YUANAI/DiffspeechResearch/inference/tts/ps_flow.py
deleted file mode 100644
index 59446cac4743d6526988de4777919f6750c2d820..0000000000000000000000000000000000000000
--- a/spaces/YUANAI/DiffspeechResearch/inference/tts/ps_flow.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import torch
-from inference.tts.base_tts_infer import BaseTTSInfer
-from modules.tts.portaspeech.portaspeech_flow import PortaSpeechFlow
-from utils.commons.ckpt_utils import load_ckpt
-from utils.commons.hparams import hparams
-
-
-class PortaSpeechFlowInfer(BaseTTSInfer):
- def build_model(self):
- ph_dict_size = len(self.ph_encoder)
- word_dict_size = len(self.word_encoder)
- model = PortaSpeechFlow(ph_dict_size, word_dict_size, self.hparams)
- load_ckpt(model, hparams['work_dir'], 'model')
- with torch.no_grad():
- model.store_inverse_all()
- model.eval()
- return model
-
- def forward_model(self, inp):
- sample = self.input_to_batch(inp)
- with torch.no_grad():
- output = self.model(
- sample['txt_tokens'],
- sample['word_tokens'],
- ph2word=sample['ph2word'],
- word_len=sample['word_lengths'].max(),
- infer=True,
- forward_post_glow=True,
- spk_id=sample.get('spk_ids')
- )
- mel_out = output['mel_out']
- wav_out = self.run_vocoder(mel_out)
- wav_out = wav_out.cpu().numpy()
- return wav_out[0]
-
-
-if __name__ == '__main__':
- PortaSpeechFlowInfer.example_run()
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/custom_dataset_dataloader.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/custom_dataset_dataloader.py
deleted file mode 100644
index ea9c4172f838d130df297bed9c0755669720c39d..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/data/custom_dataset_dataloader.py
+++ /dev/null
@@ -1,250 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Jialian Wu from https://github.com/facebookresearch/Detic/blob/main/detic/data/custom_dataset_dataloader.py
-import operator
-import torch
-import torch.utils.data
-from detectron2.utils.comm import get_world_size
-
-from detectron2.config import configurable
-from torch.utils.data.sampler import BatchSampler, Sampler
-from detectron2.data.common import DatasetFromList, MapDataset
-from detectron2.data.dataset_mapper import DatasetMapper
-from detectron2.data.build import get_detection_dataset_dicts, build_batch_data_loader
-from detectron2.data.samplers import TrainingSampler
-from detectron2.data.build import worker_init_reset_seed, print_instances_class_histogram
-from detectron2.data.build import filter_images_with_only_crowd_annotations
-from detectron2.data.build import filter_images_with_few_keypoints
-from detectron2.data.build import check_metadata_consistency
-from detectron2.data.catalog import MetadataCatalog, DatasetCatalog
-from detectron2.utils import comm
-import itertools
-from typing import Optional
-
-
-def _custom_train_loader_from_config(cfg, mapper=None, *, dataset=None, sampler=None):
- sampler_name = cfg.DATALOADER.SAMPLER_TRAIN
- if 'MultiDataset' in sampler_name:
- dataset_dicts = get_detection_dataset_dicts_with_source(
- cfg.DATASETS.TRAIN,
- filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS,
- min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE
- if cfg.MODEL.KEYPOINT_ON else 0,
- proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None,
- )
- else:
- dataset_dicts = get_detection_dataset_dicts(
- cfg.DATASETS.TRAIN,
- filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS,
- min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE
- if cfg.MODEL.KEYPOINT_ON else 0,
- proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None,
- )
-
- if mapper is None:
- mapper = DatasetMapper(cfg, True)
-
- if sampler is not None:
- pass
- elif sampler_name == "TrainingSampler":
- sampler = TrainingSampler(len(dataset))
- elif sampler_name == "MultiDatasetSampler":
- sampler = MultiDatasetSampler(
- dataset_dicts,
- dataset_ratio=cfg.DATALOADER.DATASET_RATIO,
- )
- else:
- raise ValueError("Unknown training sampler: {}".format(sampler_name))
-
- return {
- "dataset": dataset_dicts,
- "sampler": sampler,
- "mapper": mapper,
- "total_batch_size": cfg.SOLVER.IMS_PER_BATCH,
- "num_workers": cfg.DATALOADER.NUM_WORKERS,
- 'dataset_bs': cfg.DATALOADER.DATASET_BS,
- 'num_datasets': len(cfg.DATASETS.TRAIN)
- }
-
-
-@configurable(from_config=_custom_train_loader_from_config)
-def build_custom_train_loader(
- dataset, *, mapper, sampler,
- total_batch_size=16,
- num_workers=0,
- num_datasets=1,
- dataset_bs=1
-):
-
- if isinstance(dataset, list):
- dataset = DatasetFromList(dataset, copy=False)
- if mapper is not None:
- dataset = MapDataset(dataset, mapper)
- if sampler is None:
- sampler = TrainingSampler(len(dataset))
- assert isinstance(sampler, torch.utils.data.sampler.Sampler)
-
- return build_dataset_batch_data_loader(
- dataset_bs,
- dataset,
- sampler,
- total_batch_size,
- num_datasets=num_datasets,
- num_workers=num_workers,
- )
-
-
-def build_dataset_batch_data_loader(
- dataset_bs, dataset, sampler, total_batch_size, num_datasets, num_workers=0
-):
-
- world_size = get_world_size()
- assert (
- total_batch_size > 0 and total_batch_size % world_size == 0
- ), "Total batch size ({}) must be divisible by the number of gpus ({}).".format(
- total_batch_size, world_size
- )
-
- data_loader = torch.utils.data.DataLoader(
- dataset,
- sampler=sampler,
- num_workers=num_workers,
- batch_sampler=None,
- collate_fn=operator.itemgetter(0), # don't batch, but yield individual elements
- worker_init_fn=worker_init_reset_seed,
- )
-
- if num_datasets > 1:
- return MultiDatasets(data_loader, dataset_bs, num_datasets)
- else:
- return SingleDataset(data_loader, dataset_bs)
-
-
-def get_detection_dataset_dicts_with_source(
- dataset_names, filter_empty=True, min_keypoints=0, proposal_files=None
-):
- assert len(dataset_names)
- dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names]
- for dataset_name, dicts in zip(dataset_names, dataset_dicts):
- assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
-
- for source_id, (dataset_name, dicts) in \
- enumerate(zip(dataset_names, dataset_dicts)):
- assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
- for d in dicts:
- d['dataset_source'] = source_id
-
- if "annotations" in dicts[0]:
- try:
- class_names = MetadataCatalog.get(dataset_name).thing_classes
- check_metadata_consistency("thing_classes", dataset_name)
- print_instances_class_histogram(dicts, class_names)
- except AttributeError: # class names are not available for this dataset
- pass
-
- assert proposal_files is None
-
- dataset_dicts = list(itertools.chain.from_iterable(dataset_dicts))
-
- has_instances = "annotations" in dataset_dicts[0]
- if filter_empty and has_instances:
- dataset_dicts = filter_images_with_only_crowd_annotations(dataset_dicts)
- if min_keypoints > 0 and has_instances:
- dataset_dicts = filter_images_with_few_keypoints(dataset_dicts, min_keypoints)
-
- return dataset_dicts
-
-
-class MultiDatasetSampler(Sampler):
- def __init__(
- self,
- dataset_dicts,
- dataset_ratio,
- seed: Optional[int] = None,
- ):
- sizes = [0 for _ in range(len(dataset_ratio))]
- for d in dataset_dicts:
- sizes[d['dataset_source']] += 1
- print('dataset sizes', sizes)
- self.sizes = sizes
- assert len(dataset_ratio) == len(sizes), \
- 'length of dataset ratio {} should be equal to number of datasets {}'.format(
- len(dataset_ratio), len(sizes)
- )
- if seed is None:
- seed = comm.shared_random_seed()
- self._seed = int(seed)
- self._rank = comm.get_rank()
- self._world_size = comm.get_world_size()
-
- self.dataset_ids = torch.tensor(
- [d['dataset_source'] for d in dataset_dicts], dtype=torch.long)
- self.dataset_ratio = dataset_ratio
-
- dataset_weight = [torch.ones(s) * max(sizes) / s * r / sum(dataset_ratio) \
- for i, (r, s) in enumerate(zip(dataset_ratio, sizes))]
- dataset_weight = torch.cat(dataset_weight)
-
- self.weights = dataset_weight
- self.sample_epoch_size = len(self.weights)
-
- def __iter__(self):
- start = self._rank
- yield from itertools.islice(
- self._infinite_indices(), start, None, self._world_size)
-
- def _infinite_indices(self):
- g = torch.Generator()
- g.manual_seed(self._seed)
- while True:
- if len(self.dataset_ratio) > 1:
- # multiple datasets
- ids = torch.multinomial(
- self.weights, self.sample_epoch_size, generator=g,
- replacement=True)
- nums = [(self.dataset_ids[ids] == i).sum().int().item() \
- for i in range(len(self.sizes))]
- yield from ids
- else:
- # single dataset
- yield from torch.randperm(self.sizes[0], generator=g).tolist()
-
-
-class SingleDataset(torch.utils.data.IterableDataset):
- def __init__(self, dataset, batch_sizes):
- self.dataset = dataset
- self.batch_sizes = batch_sizes
- self._buckets = [[] for _ in range(2)]
-
- def __iter__(self):
- for d in self.dataset:
- w, h = d["width"], d["height"]
- aspect_ratio_bucket_id = 0 if w > h else 1
- bucket_id = aspect_ratio_bucket_id
- bucket = self._buckets[bucket_id]
- bucket.append(d)
- if len(bucket) == self.batch_sizes:
- yield bucket[:]
- del bucket[:]
-
-
-class MultiDatasets(torch.utils.data.IterableDataset):
- def __init__(self, dataset, batch_sizes, num_datasets):
- self.dataset = dataset
- self.batch_sizes = batch_sizes
- self._buckets = [[] for _ in range(2 * num_datasets)]
- self.iter_idx = 0
- self.num_datasets = num_datasets
-
- def __iter__(self):
- for d in self.dataset:
- w, h = d["width"], d["height"]
- aspect_ratio_bucket_id = 0 if w > h else 1
- bucket_id = d['dataset_source'] * 2 + aspect_ratio_bucket_id
- bucket = self._buckets[bucket_id]
- if len(bucket) < self.batch_sizes:
- bucket.append(d)
- selected_dataset = self.iter_idx % self.num_datasets
- if len(bucket) == self.batch_sizes and selected_dataset == d['dataset_source']:
- self.iter_idx += 1
- yield bucket[:]
- del bucket[:]
\ No newline at end of file
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/evaluation/eval.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/evaluation/eval.py
deleted file mode 100644
index 951a0920ec3d93703245562d4f76ec597e672ad9..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/evaluation/eval.py
+++ /dev/null
@@ -1,156 +0,0 @@
-import itertools
-import json
-import os
-from detectron2.structures import Boxes, BoxMode, pairwise_iou
-from detectron2.utils.file_io import PathManager
-import numpy as np
-import pycocotools.mask as mask_util
-from detectron2.evaluation.coco_evaluation import COCOEvaluator
-from detectron2.evaluation.coco_evaluation import _evaluate_predictions_on_coco
-
-
-class GRiTCOCOEvaluator(COCOEvaluator):
- def process(self, inputs, outputs):
- for input, output in zip(inputs, outputs):
- prediction = {"image_id": input["image_id"]}
-
- if "instances" in output:
- instances = output["instances"].to(self._cpu_device)
- prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
-
- if len(prediction) > 1:
- self._predictions.append(prediction)
-
- def _eval_predictions(self, predictions, img_ids=None):
- self._logger.info("Preparing results for COCO format ...")
- coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))
- tasks = self._tasks or self._tasks_from_predictions(coco_results)
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "coco_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(coco_results))
- f.flush()
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info(
- "Evaluating predictions with {} COCO API...".format(
- "unofficial" if self._use_fast_impl else "official"
- )
- )
-
- coco_results = self.convert_classname_to_id(coco_results)
-
- for task in sorted(tasks):
- assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!"
- coco_eval = (
- _evaluate_predictions_on_coco(
- self._coco_api,
- coco_results,
- task,
- kpt_oks_sigmas=self._kpt_oks_sigmas,
- use_fast_impl=self._use_fast_impl,
- img_ids=img_ids,
- max_dets_per_image=self._max_dets_per_image,
- )
- if len(coco_results) > 0
- else None # cocoapi does not handle empty results very well
- )
-
- res = self._derive_coco_results(
- coco_eval, task, class_names=self._metadata.get("thing_classes")
- )
- self._results[task] = res
-
- def convert_classname_to_id(self, results):
- outputs = []
- class_name_to_id = {}
- categories = sorted(self._coco_api.dataset['categories'], key=lambda x: x['id'])
-
- for cat in categories:
- class_name_to_id[cat['name']] = cat['id']
-
- for pred in results:
- if pred['object_descriptions'] in class_name_to_id:
- pred['category_id'] = class_name_to_id[pred['object_descriptions']]
- del pred['object_descriptions']
- outputs.append(pred)
-
- return outputs
-
-
-class GRiTVGEvaluator(COCOEvaluator):
- def process(self, inputs, outputs):
- for input, output in zip(inputs, outputs):
- assert input["image_id"] == int(input['file_name'].split('/')[-1].split('.')[0])
- prediction = {"image_id": input["image_id"]}
-
- if "instances" in output:
- instances = output["instances"].to(self._cpu_device)
- prediction["instances"] = instances_to_coco_json(instances, input["image_id"], output_logits=True)
- h = input['height']
- w = input['width']
- scale = 720.0 / max(h, w)
- scaled_inst = []
- for inst in prediction["instances"]:
- inst['bbox'][0] = inst['bbox'][0] * scale
- inst['bbox'][1] = inst['bbox'][1] * scale
- inst['bbox'][2] = inst['bbox'][2] * scale
- inst['bbox'][3] = inst['bbox'][3] * scale
- scaled_inst.append(inst)
- if len(scaled_inst) > 0:
- prediction["instances"] = scaled_inst
- if len(prediction) > 1:
- self._predictions.append(prediction)
-
- def _eval_predictions(self, predictions, img_ids=None):
- '''
- This is only for saving the results to json file
- '''
- self._logger.info("Preparing results for COCO format ...")
- coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "vg_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(coco_results))
- f.flush()
-
-
-def instances_to_coco_json(instances, img_id, output_logits=False):
- """
- Add object_descriptions and logit (if applicable) to
- detectron2's instances_to_coco_json
- """
- num_instance = len(instances)
- if num_instance == 0:
- return []
-
- boxes = instances.pred_boxes.tensor.numpy()
- boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
- boxes = boxes.tolist()
- scores = instances.scores.tolist()
- classes = instances.pred_classes.tolist()
- object_descriptions = instances.pred_object_descriptions.data
- if output_logits:
- logits = instances.logits.tolist()
-
- results = []
- for k in range(num_instance):
- result = {
- "image_id": img_id,
- "category_id": classes[k],
- "bbox": boxes[k],
- "score": scores[k],
- 'object_descriptions': object_descriptions[k],
- }
- if output_logits:
- result["logit"] = logits[k]
-
- results.append(result)
- return results
\ No newline at end of file
diff --git a/spaces/YuanMio/vits-uma-genshin-honkai/mel_processing.py b/spaces/YuanMio/vits-uma-genshin-honkai/mel_processing.py
deleted file mode 100644
index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000
--- a/spaces/YuanMio/vits-uma-genshin-honkai/mel_processing.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/data/__init__.py b/spaces/Yudha515/Rvc-Models/audiocraft/data/__init__.py
deleted file mode 100644
index 708a3dcead8dda89374a021177481dacae9f7fe9..0000000000000000000000000000000000000000
--- a/spaces/Yudha515/Rvc-Models/audiocraft/data/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from . import audio, audio_dataset
diff --git a/spaces/Yuliang/ECON/lib/common/train_util.py b/spaces/Yuliang/ECON/lib/common/train_util.py
deleted file mode 100644
index 324547c05fd7281381262f16d6240d1b9f2240da..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ECON/lib/common/train_util.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is
-# holder of all proprietary rights on this computer program.
-# You can only use this computer program if you have closed
-# a license agreement with MPG or you get the right to use the computer
-# program from someone who is authorized to grant you that right.
-# Any use of the computer program without a valid license is prohibited and
-# liable to prosecution.
-#
-# Copyright©2019 Max-Planck-Gesellschaft zur Förderung
-# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute
-# for Intelligent Systems. All rights reserved.
-#
-# Contact: ps-license@tuebingen.mpg.de
-
-import pytorch_lightning as pl
-import torch
-from termcolor import colored
-
-from ..dataset.mesh_util import *
-from ..net.geometry import orthogonal
-
-
-class Format:
- end = '\033[0m'
- start = '\033[4m'
-
-
-def init_loss():
-
- losses = {
- # Cloth: chamfer distance
- "cloth": {"weight": 1e3, "value": 0.0},
- # Stiffness: [RT]_v1 - [RT]_v2 (v1-edge-v2)
- "stiff": {"weight": 1e5, "value": 0.0},
- # Cloth: det(R) = 1
- "rigid": {"weight": 1e5, "value": 0.0},
- # Cloth: edge length
- "edge": {"weight": 0, "value": 0.0},
- # Cloth: normal consistency
- "nc": {"weight": 0, "value": 0.0},
- # Cloth: Laplacian smoothing
- "lapla": {"weight": 1e2, "value": 0.0},
- # Body: Normal_pred - Normal_smpl
- "normal": {"weight": 1e0, "value": 0.0},
- # Body: Silhouette_pred - Silhouette_smpl
- "silhouette": {"weight": 1e0, "value": 0.0},
- # Joint: reprojected joints difference
- "joint": {"weight": 5e0, "value": 0.0},
- }
-
- return losses
-
-
-class SubTrainer(pl.Trainer):
- def save_checkpoint(self, filepath, weights_only=False):
- """Save model/training states as a checkpoint file through state-dump and file-write.
- Args:
- filepath: write-target file's path
- weights_only: saving model weights only
- """
- _checkpoint = self._checkpoint_connector.dump_checkpoint(weights_only)
-
- del_keys = []
- for key in _checkpoint["state_dict"].keys():
- for ignore_key in ["normal_filter", "voxelization", "reconEngine"]:
- if ignore_key in key:
- del_keys.append(key)
- for key in del_keys:
- del _checkpoint["state_dict"][key]
-
- pl.utilities.cloud_io.atomic_save(_checkpoint, filepath)
-
-
-def query_func(opt, netG, features, points, proj_matrix=None):
- """
- - points: size of (bz, N, 3)
- - proj_matrix: size of (bz, 4, 4)
- return: size of (bz, 1, N)
- """
- assert len(points) == 1
- samples = points.repeat(opt.num_views, 1, 1)
- samples = samples.permute(0, 2, 1) # [bz, 3, N]
-
- # view specific query
- if proj_matrix is not None:
- samples = orthogonal(samples, proj_matrix)
-
- calib_tensor = torch.stack([torch.eye(4).float()], dim=0).type_as(samples)
-
- preds = netG.query(
- features=features,
- points=samples,
- calibs=calib_tensor,
- regressor=netG.if_regressor,
- )
-
- if type(preds) is list:
- preds = preds[0]
-
- return preds
-
-
-def query_func_IF(batch, netG, points):
- """
- - points: size of (bz, N, 3)
- return: size of (bz, 1, N)
- """
-
- batch["samples_geo"] = points
- batch["calib"] = torch.stack([torch.eye(4).float()], dim=0).type_as(points)
-
- preds = netG(batch)
-
- return preds.unsqueeze(1)
-
-
-def batch_mean(res, key):
- return torch.stack([
- x[key] if torch.is_tensor(x[key]) else torch.as_tensor(x[key]) for x in res
- ]).mean()
-
-
-def accumulate(outputs, rot_num, split):
-
- hparam_log_dict = {}
-
- metrics = outputs[0].keys()
- datasets = split.keys()
-
- for dataset in datasets:
- for metric in metrics:
- keyword = f"{dataset}/{metric}"
- if keyword not in hparam_log_dict.keys():
- hparam_log_dict[keyword] = 0
- for idx in range(split[dataset][0] * rot_num, split[dataset][1] * rot_num):
- hparam_log_dict[keyword] += outputs[idx][metric].item()
- hparam_log_dict[keyword] /= (split[dataset][1] - split[dataset][0]) * rot_num
-
- print(colored(hparam_log_dict, "green"))
-
- return hparam_log_dict
diff --git a/spaces/abdvl/datahub_qa_bot/docs/advanced/db-retention.md b/spaces/abdvl/datahub_qa_bot/docs/advanced/db-retention.md
deleted file mode 100644
index 91c7e2f0d3adc9b7bb28d19420a815bf103e2fa9..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/advanced/db-retention.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# Configuring Database Retention
-
-## Goal
-
-DataHub stores different versions of [metadata aspects](https://datahubproject.io/docs/what/aspect) as they are ingested
-using a database (or key-value store). These multiple versions allow us to look at an aspect's historical changes and
-roll back to a previous version if incorrect metadata is ingested. However, every stored version takes additional storage
-space while adding progressively less value to the system. We need to be able to impose a **retention** policy on these
-records to keep the size of the DB in check.
-
-The goal of the retention system is to be able to **configure and enforce retention policies** on documents at each of these
-various levels:
- - global
- - entity-level
- - aspect-level
-
-## What type of retention policies are supported?
-
-We support 3 types of retention policies for aspects:
-
-| Policy | Versions Kept |
-|:-------------:|:-----------------------------------:|
-| Indefinite | All versions |
-| Version-based | Latest *N* versions |
-| Time-based | Versions ingested in last *N* seconds |
-
-**Note:** The latest version (version 0) is never deleted. This ensures core functionality of DataHub is not impacted while applying retention.
-
-## When is the retention policy applied?
-
-As of now, retention policies are applied in two places:
-
-1. **GMS boot-up**: A bootstrap step ingests the predefined set of retention policies. If no policy existed before or the existing policy
- was updated, an asynchronous call will be triggered. It will apply the retention policy (or policies) to **all** records in the database.
-2. **Ingest**: On every ingest, if an existing aspect got updated, it applies the retention policy to the urn-aspect pair being ingested.
-
-We are planning to support a cron-based application of retention in the near future to ensure that the time-based retention is applied correctly.
-
-## How to configure?
-
-For the initial iteration, we have made this feature opt-in. Please set **ENTITY_SERVICE_ENABLE_RETENTION=true** when
-creating the datahub-gms container/k8s pod.
-
-On GMS start-up, retention policies are initialized as follows:
-1. First, the default **version-based** retention policy is applied, keeping the **20 latest versions** for every entity-aspect pair.
-2. Second, we read YAML files from the `/etc/datahub/plugins/retention` directory and overlay them on the default set of policies we provide.
-
-For Docker, we set docker-compose to mount the `${HOME}/.datahub` directory to the `/etc/datahub` directory
-within the containers, so you can customize the initial set of retention policies by creating
-a `${HOME}/.datahub/plugins/retention/retention.yaml` file.
-
-We will support a standardized way to do this in Kubernetes setup in the near future.
-
-The format for the YAML file is as follows:
-
-```yaml
-- entity: "*" # denotes that policy will be applied to all entities
- aspect: "*" # denotes that policy will be applied to all aspects
- config:
- retention:
- version:
- maxVersions: 20
-- entity: "dataset"
- aspect: "datasetProperties"
- config:
- retention:
- version:
- maxVersions: 20
- time:
- maxAgeInSeconds: 2592000 # 30 days
-```
-
-Note that it searches for the policy corresponding to the (entity, aspect) pair in the following order:
-1. entity, aspect
-2. *, aspect
-3. entity, *
-4. *, *
-
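-As an illustration of this lookup order, here is a hypothetical overlay (example values only, not the defaults we ship): for the
-`datasetProperties` aspect of `dataset` entities the more specific second entry wins, while every other entity-aspect pair falls back to the wildcard entry.
-
-```yaml
-- entity: "*"
- aspect: "*"
- config:
- retention:
- version:
- maxVersions: 20
-- entity: "dataset"
- aspect: "datasetProperties"
- config:
- retention:
- version:
- maxVersions: 5
-```
-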
-Once you restart datahub-gms with the plugin YAML file in place, the new set of retention policies will be applied.
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py
deleted file mode 100644
index 98392ac04c4c44a7f4e7b1c0808266875877dd1f..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py
+++ /dev/null
@@ -1,298 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from annotator.uniformer.mmseg.core import add_prefix
-from annotator.uniformer.mmseg.ops import resize
-from .. import builder
-from ..builder import SEGMENTORS
-from .base import BaseSegmentor
-
-
-@SEGMENTORS.register_module()
-class EncoderDecoder(BaseSegmentor):
- """Encoder Decoder segmentors.
-
- EncoderDecoder typically consists of backbone, decode_head, auxiliary_head.
- Note that auxiliary_head is only used for deep supervision during training,
- which could be dumped during inference.
- """
-
- def __init__(self,
- backbone,
- decode_head,
- neck=None,
- auxiliary_head=None,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(EncoderDecoder, self).__init__()
- self.backbone = builder.build_backbone(backbone)
- if neck is not None:
- self.neck = builder.build_neck(neck)
- self._init_decode_head(decode_head)
- self._init_auxiliary_head(auxiliary_head)
-
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
-
- self.init_weights(pretrained=pretrained)
-
- assert self.with_decode_head
-
- def _init_decode_head(self, decode_head):
- """Initialize ``decode_head``"""
- self.decode_head = builder.build_head(decode_head)
- self.align_corners = self.decode_head.align_corners
- self.num_classes = self.decode_head.num_classes
-
- def _init_auxiliary_head(self, auxiliary_head):
- """Initialize ``auxiliary_head``"""
- if auxiliary_head is not None:
- if isinstance(auxiliary_head, list):
- self.auxiliary_head = nn.ModuleList()
- for head_cfg in auxiliary_head:
- self.auxiliary_head.append(builder.build_head(head_cfg))
- else:
- self.auxiliary_head = builder.build_head(auxiliary_head)
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone and heads.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
-
- super(EncoderDecoder, self).init_weights(pretrained)
- self.backbone.init_weights(pretrained=pretrained)
- self.decode_head.init_weights()
- if self.with_auxiliary_head:
- if isinstance(self.auxiliary_head, nn.ModuleList):
- for aux_head in self.auxiliary_head:
- aux_head.init_weights()
- else:
- self.auxiliary_head.init_weights()
-
- def extract_feat(self, img):
- """Extract features from images."""
- x = self.backbone(img)
- if self.with_neck:
- x = self.neck(x)
- return x
-
- def encode_decode(self, img, img_metas):
- """Encode images with backbone and decode into a semantic segmentation
- map of the same size as input."""
- x = self.extract_feat(img)
- out = self._decode_head_forward_test(x, img_metas)
- out = resize(
- input=out,
- size=img.shape[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- return out
-
- def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg):
- """Run forward function and calculate loss for decode head in
- training."""
- losses = dict()
- loss_decode = self.decode_head.forward_train(x, img_metas,
- gt_semantic_seg,
- self.train_cfg)
-
- losses.update(add_prefix(loss_decode, 'decode'))
- return losses
-
- def _decode_head_forward_test(self, x, img_metas):
- """Run forward function and calculate loss for decode head in
- inference."""
- seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg)
- return seg_logits
-
- def _auxiliary_head_forward_train(self, x, img_metas, gt_semantic_seg):
- """Run forward function and calculate loss for auxiliary head in
- training."""
- losses = dict()
- if isinstance(self.auxiliary_head, nn.ModuleList):
- for idx, aux_head in enumerate(self.auxiliary_head):
- loss_aux = aux_head.forward_train(x, img_metas,
- gt_semantic_seg,
- self.train_cfg)
- losses.update(add_prefix(loss_aux, f'aux_{idx}'))
- else:
- loss_aux = self.auxiliary_head.forward_train(
- x, img_metas, gt_semantic_seg, self.train_cfg)
- losses.update(add_prefix(loss_aux, 'aux'))
-
- return losses
-
- def forward_dummy(self, img):
- """Dummy forward function."""
- seg_logit = self.encode_decode(img, None)
-
- return seg_logit
-
- def forward_train(self, img, img_metas, gt_semantic_seg):
- """Forward function for training.
-
- Args:
- img (Tensor): Input images.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- gt_semantic_seg (Tensor): Semantic segmentation masks
- used if the architecture supports semantic segmentation task.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
-
- x = self.extract_feat(img)
-
- losses = dict()
-
- loss_decode = self._decode_head_forward_train(x, img_metas,
- gt_semantic_seg)
- losses.update(loss_decode)
-
- if self.with_auxiliary_head:
- loss_aux = self._auxiliary_head_forward_train(
- x, img_metas, gt_semantic_seg)
- losses.update(loss_aux)
-
- return losses
-
- # TODO refactor
- def slide_inference(self, img, img_meta, rescale):
- """Inference by sliding-window with overlap.
-
- If h_crop > h_img or w_crop > w_img, the small patch will be used to
- decode without padding.
- """
-
- h_stride, w_stride = self.test_cfg.stride
- h_crop, w_crop = self.test_cfg.crop_size
- batch_size, _, h_img, w_img = img.size()
- num_classes = self.num_classes
- h_grids = max(h_img - h_crop + h_stride - 1, 0) // h_stride + 1
- w_grids = max(w_img - w_crop + w_stride - 1, 0) // w_stride + 1
- preds = img.new_zeros((batch_size, num_classes, h_img, w_img))
- count_mat = img.new_zeros((batch_size, 1, h_img, w_img))
- for h_idx in range(h_grids):
- for w_idx in range(w_grids):
- y1 = h_idx * h_stride
- x1 = w_idx * w_stride
- y2 = min(y1 + h_crop, h_img)
- x2 = min(x1 + w_crop, w_img)
- y1 = max(y2 - h_crop, 0)
- x1 = max(x2 - w_crop, 0)
- crop_img = img[:, :, y1:y2, x1:x2]
- crop_seg_logit = self.encode_decode(crop_img, img_meta)
- preds += F.pad(crop_seg_logit,
- (int(x1), int(preds.shape[3] - x2), int(y1),
- int(preds.shape[2] - y2)))
-
- count_mat[:, :, y1:y2, x1:x2] += 1
- assert (count_mat == 0).sum() == 0
- if torch.onnx.is_in_onnx_export():
- # cast count_mat to constant while exporting to ONNX
- count_mat = torch.from_numpy(
- count_mat.cpu().detach().numpy()).to(device=img.device)
- preds = preds / count_mat
- if rescale:
- preds = resize(
- preds,
- size=img_meta[0]['ori_shape'][:2],
- mode='bilinear',
- align_corners=self.align_corners,
- warning=False)
- return preds
-
- def whole_inference(self, img, img_meta, rescale):
- """Inference with full image."""
-
- seg_logit = self.encode_decode(img, img_meta)
- if rescale:
- # support dynamic shape for onnx
- if torch.onnx.is_in_onnx_export():
- size = img.shape[2:]
- else:
- size = img_meta[0]['ori_shape'][:2]
- seg_logit = resize(
- seg_logit,
- size=size,
- mode='bilinear',
- align_corners=self.align_corners,
- warning=False)
-
- return seg_logit
-
- def inference(self, img, img_meta, rescale):
- """Inference with slide/whole style.
-
- Args:
- img (Tensor): The input image of shape (N, 3, H, W).
- img_meta (dict): Image info dict where each dict has: 'img_shape',
- 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- rescale (bool): Whether rescale back to original shape.
-
- Returns:
- Tensor: The output segmentation map.
- """
-
- assert self.test_cfg.mode in ['slide', 'whole']
- ori_shape = img_meta[0]['ori_shape']
- assert all(_['ori_shape'] == ori_shape for _ in img_meta)
- if self.test_cfg.mode == 'slide':
- seg_logit = self.slide_inference(img, img_meta, rescale)
- else:
- seg_logit = self.whole_inference(img, img_meta, rescale)
- output = F.softmax(seg_logit, dim=1)
- flip = img_meta[0]['flip']
- if flip:
- flip_direction = img_meta[0]['flip_direction']
- assert flip_direction in ['horizontal', 'vertical']
- if flip_direction == 'horizontal':
- output = output.flip(dims=(3, ))
- elif flip_direction == 'vertical':
- output = output.flip(dims=(2, ))
-
- return output
-
- def simple_test(self, img, img_meta, rescale=True):
- """Simple test with single image."""
- seg_logit = self.inference(img, img_meta, rescale)
- seg_pred = seg_logit.argmax(dim=1)
- if torch.onnx.is_in_onnx_export():
- # our inference backend only support 4D output
- seg_pred = seg_pred.unsqueeze(0)
- return seg_pred
- seg_pred = seg_pred.cpu().numpy()
- # unravel batch dim
- seg_pred = list(seg_pred)
- return seg_pred
-
- def aug_test(self, imgs, img_metas, rescale=True):
- """Test with augmentations.
-
- Only rescale=True is supported.
- """
- # aug_test rescale all imgs back to ori_shape for now
- assert rescale
- # to save memory, we get augmented seg logit inplace
- seg_logit = self.inference(imgs[0], img_metas[0], rescale)
- for i in range(1, len(imgs)):
- cur_seg_logit = self.inference(imgs[i], img_metas[i], rescale)
- seg_logit += cur_seg_logit
- seg_logit /= len(imgs)
- seg_pred = seg_logit.argmax(dim=1)
- seg_pred = seg_pred.cpu().numpy()
- # unravel batch dim
- seg_pred = list(seg_pred)
- return seg_pred
diff --git a/spaces/adhisetiawan/anime-voice-generator/commons.py b/spaces/adhisetiawan/anime-voice-generator/commons.py
deleted file mode 100644
index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000
--- a/spaces/adhisetiawan/anime-voice-generator/commons.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
diff --git a/spaces/aijack/jojo/e4e/models/latent_codes_pool.py b/spaces/aijack/jojo/e4e/models/latent_codes_pool.py
deleted file mode 100644
index 0281d4b5e80f8eb26e824fa35b4f908dcb6634e6..0000000000000000000000000000000000000000
--- a/spaces/aijack/jojo/e4e/models/latent_codes_pool.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import random
-import torch
-
-
-class LatentCodesPool:
- """This class implements latent codes buffer that stores previously generated w latent codes.
- This buffer enables us to update discriminators using a history of generated w's
- rather than the ones produced by the latest encoder.
- """
-
- def __init__(self, pool_size):
- """Initialize the ImagePool class
- Parameters:
- pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created
- """
- self.pool_size = pool_size
- if self.pool_size > 0: # create an empty pool
- self.num_ws = 0
- self.ws = []
-
- def query(self, ws):
- """Return w's from the pool.
- Parameters:
- ws: the latest generated w's from the generator
- Returns w's from the buffer.
- By 50/100, the buffer will return input w's.
- By 50/100, the buffer will return w's previously stored in the buffer,
- and insert the current w's to the buffer.
- """
- if self.pool_size == 0: # if the buffer size is 0, do nothing
- return ws
- return_ws = []
- for w in ws: # ws.shape: (batch, 512) or (batch, n_latent, 512)
- # w = torch.unsqueeze(image.data, 0)
- if w.ndim == 2:
- i = random.randint(0, len(w) - 1) # apply a random latent index as a candidate
- w = w[i]
- self.handle_w(w, return_ws)
- return_ws = torch.stack(return_ws, 0) # collect all the images and return
- return return_ws
-
- def handle_w(self, w, return_ws):
- if self.num_ws < self.pool_size: # if the buffer is not full; keep inserting current codes to the buffer
- self.num_ws = self.num_ws + 1
- self.ws.append(w)
- return_ws.append(w)
- else:
- p = random.uniform(0, 1)
- if p > 0.5: # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer
- random_id = random.randint(0, self.pool_size - 1) # randint is inclusive
- tmp = self.ws[random_id].clone()
- self.ws[random_id] = w
- return_ws.append(tmp)
- else: # by another 50% chance, the buffer will return the current image
- return_ws.append(w)
diff --git a/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/mask2former_transformer_decoder.py b/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/mask2former_transformer_decoder.py
deleted file mode 100644
index 52594f62693e6bf48a4c140ba2fe7131a0317774..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/mask2former_transformer_decoder.py
+++ /dev/null
@@ -1,461 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py
-import logging
-import fvcore.nn.weight_init as weight_init
-from typing import Optional
-import torch
-from torch import nn, Tensor
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d
-
-from .position_encoding import PositionEmbeddingSine
-from .maskformer_transformer_decoder import TRANSFORMER_DECODER_REGISTRY
-
-
-class SelfAttentionLayer(nn.Module):
-
- def __init__(self, d_model, nhead, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
-
- self.norm = nn.LayerNorm(d_model)
- self.dropout = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos: Optional[Tensor]):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt,
- tgt_mask: Optional[Tensor] = None,
- tgt_key_padding_mask: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- q = k = self.with_pos_embed(tgt, query_pos)
- tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask,
- key_padding_mask=tgt_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
- tgt = self.norm(tgt)
-
- return tgt
-
- def forward_pre(self, tgt,
- tgt_mask: Optional[Tensor] = None,
- tgt_key_padding_mask: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- tgt2 = self.norm(tgt)
- q = k = self.with_pos_embed(tgt2, query_pos)
- tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,
- key_padding_mask=tgt_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
-
- return tgt
-
- def forward(self, tgt,
- tgt_mask: Optional[Tensor] = None,
- tgt_key_padding_mask: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- if self.normalize_before:
- return self.forward_pre(tgt, tgt_mask,
- tgt_key_padding_mask, query_pos)
- return self.forward_post(tgt, tgt_mask,
- tgt_key_padding_mask, query_pos)
-
-
-class CrossAttentionLayer(nn.Module):
-
- def __init__(self, d_model, nhead, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
-
- self.norm = nn.LayerNorm(d_model)
- self.dropout = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos: Optional[Tensor]):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt, memory,
- memory_mask: Optional[Tensor] = None,
- memory_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos),
- key=self.with_pos_embed(memory, pos),
- value=memory, attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
- tgt = self.norm(tgt)
-
- return tgt
-
- def forward_pre(self, tgt, memory,
- memory_mask: Optional[Tensor] = None,
- memory_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- tgt2 = self.norm(tgt)
- tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos),
- key=self.with_pos_embed(memory, pos),
- value=memory, attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
-
- return tgt
-
- def forward(self, tgt, memory,
- memory_mask: Optional[Tensor] = None,
- memory_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- if self.normalize_before:
- return self.forward_pre(tgt, memory, memory_mask,
- memory_key_padding_mask, pos, query_pos)
- return self.forward_post(tgt, memory, memory_mask,
- memory_key_padding_mask, pos, query_pos)
-
-
-class FFNLayer(nn.Module):
-
- def __init__(self, d_model, dim_feedforward=2048, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- # Implementation of Feedforward model
- self.linear1 = nn.Linear(d_model, dim_feedforward)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(dim_feedforward, d_model)
-
- self.norm = nn.LayerNorm(d_model)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos: Optional[Tensor]):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt):
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
- tgt = tgt + self.dropout(tgt2)
- tgt = self.norm(tgt)
- return tgt
-
- def forward_pre(self, tgt):
- tgt2 = self.norm(tgt)
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
- tgt = tgt + self.dropout(tgt2)
- return tgt
-
- def forward(self, tgt):
- if self.normalize_before:
- return self.forward_pre(tgt)
- return self.forward_post(tgt)
-
-
-def _get_activation_fn(activation):
- """Return an activation function given a string"""
- if activation == "relu":
- return F.relu
- if activation == "gelu":
- return F.gelu
- if activation == "glu":
- return F.glu
- raise RuntimeError(F"activation should be relu/gelu/glu, not {activation}.")
-
-
-class MLP(nn.Module):
- """ Very simple multi-layer perceptron (also called FFN)"""
-
- def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
- super().__init__()
- self.num_layers = num_layers
- h = [hidden_dim] * (num_layers - 1)
- self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
-
- def forward(self, x):
- for i, layer in enumerate(self.layers):
- x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
- return x
-
-
-@TRANSFORMER_DECODER_REGISTRY.register()
-class MultiScaleMaskedTransformerDecoder(nn.Module):
-
- _version = 2
-
- def _load_from_state_dict(
- self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- ):
- version = local_metadata.get("version", None)
- if version is None or version < 2:
- # Do not warn if train from scratch
- scratch = True
- logger = logging.getLogger(__name__)
- for k in list(state_dict.keys()):
- newk = k
- if "static_query" in k:
- newk = k.replace("static_query", "query_feat")
- if newk != k:
- state_dict[newk] = state_dict[k]
- del state_dict[k]
- scratch = False
-
- if not scratch:
- logger.warning(
- f"Weight format of {self.__class__.__name__} have changed! "
- "Please upgrade your models. Applying automatic conversion now ..."
- )
-
- @configurable
- def __init__(
- self,
- in_channels,
- mask_classification=True,
- *,
- num_classes: int,
- hidden_dim: int,
- num_queries: int,
- nheads: int,
- dim_feedforward: int,
- dec_layers: int,
- pre_norm: bool,
- mask_dim: int,
- enforce_input_project: bool,
- ):
- """
- NOTE: this interface is experimental.
- Args:
- in_channels: channels of the input features
- mask_classification: whether to add mask classifier or not
- num_classes: number of classes
- hidden_dim: Transformer feature dimension
- num_queries: number of queries
- nheads: number of heads
- dim_feedforward: feature dimension in feedforward network
- dec_layers: number of Transformer decoder layers
- pre_norm: whether to use pre-LayerNorm or not
- mask_dim: mask feature dimension
- enforce_input_project: add a 1x1 input projection conv even if the input
- channels and hidden dim are identical
- """
- super().__init__()
-
- assert mask_classification, "Only support mask classification model"
- self.mask_classification = mask_classification
-
- # positional encoding
- N_steps = hidden_dim // 2
- self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
-
- # define Transformer decoder here
- self.num_heads = nheads
- self.num_layers = dec_layers
- self.transformer_self_attention_layers = nn.ModuleList()
- self.transformer_cross_attention_layers = nn.ModuleList()
- self.transformer_ffn_layers = nn.ModuleList()
-
- for _ in range(self.num_layers):
- self.transformer_self_attention_layers.append(
- SelfAttentionLayer(
- d_model=hidden_dim,
- nhead=nheads,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.transformer_cross_attention_layers.append(
- CrossAttentionLayer(
- d_model=hidden_dim,
- nhead=nheads,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.transformer_ffn_layers.append(
- FFNLayer(
- d_model=hidden_dim,
- dim_feedforward=dim_feedforward,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.decoder_norm = nn.LayerNorm(hidden_dim)
-
- self.num_queries = num_queries
- # learnable query features
- self.query_feat = nn.Embedding(num_queries, hidden_dim)
- # learnable query p.e.
- self.query_embed = nn.Embedding(num_queries, hidden_dim)
-
- # level embedding (we always use 3 scales)
- self.num_feature_levels = 3
- self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim)
- self.input_proj = nn.ModuleList()
- for _ in range(self.num_feature_levels):
- if in_channels != hidden_dim or enforce_input_project:
- self.input_proj.append(Conv2d(in_channels, hidden_dim, kernel_size=1))
- weight_init.c2_xavier_fill(self.input_proj[-1])
- else:
- self.input_proj.append(nn.Sequential())
-
- # output FFNs
- if self.mask_classification:
- self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
- self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
-
- @classmethod
- def from_config(cls, cfg, in_channels, mask_classification):
- ret = {}
- ret["in_channels"] = in_channels
- ret["mask_classification"] = mask_classification
-
- ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES
- ret["hidden_dim"] = cfg.MODEL.MASK_FORMER.HIDDEN_DIM
- ret["num_queries"] = cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES
- # Transformer parameters:
- ret["nheads"] = cfg.MODEL.MASK_FORMER.NHEADS
- ret["dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD
-
- # NOTE: because we add learnable query features, which require supervision,
- # we subtract 1 from the number of decoder layers to stay consistent with the loss
- # implementation: the number of auxiliary losses always equals the number of
- # decoder layers, and with learnable query features it equals the number of
- # decoder layers plus 1.
- assert cfg.MODEL.MASK_FORMER.DEC_LAYERS >= 1
- ret["dec_layers"] = cfg.MODEL.MASK_FORMER.DEC_LAYERS - 1
- ret["pre_norm"] = cfg.MODEL.MASK_FORMER.PRE_NORM
- ret["enforce_input_project"] = cfg.MODEL.MASK_FORMER.ENFORCE_INPUT_PROJ
-
- ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
-
- return ret
-
- def forward(self, x, mask_features, mask = None):
- # x is a list of multi-scale feature
- assert len(x) == self.num_feature_levels
- src = []
- pos = []
- size_list = []
-
- # disable mask, it does not affect performance
- del mask
-
- for i in range(self.num_feature_levels):
- size_list.append(x[i].shape[-2:])
- pos.append(self.pe_layer(x[i], None).flatten(2))
- src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None])
-
- # flatten NxCxHxW to HWxNxC
- pos[-1] = pos[-1].permute(2, 0, 1)
- src[-1] = src[-1].permute(2, 0, 1)
-
- _, bs, _ = src[0].shape
-
- # QxNxC
- query_embed = self.query_embed.weight.unsqueeze(1).repeat(1, bs, 1)
- output = self.query_feat.weight.unsqueeze(1).repeat(1, bs, 1)
-
- predictions_class = []
- predictions_mask = []
-
- # prediction heads on learnable query features
- outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[0])
- predictions_class.append(outputs_class)
- predictions_mask.append(outputs_mask)
-
- for i in range(self.num_layers):
- level_index = i % self.num_feature_levels
- attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False
- # attention: cross-attention first
- output = self.transformer_cross_attention_layers[i](
- output, src[level_index],
- memory_mask=attn_mask,
- memory_key_padding_mask=None, # here we do not apply masking on padded region
- pos=pos[level_index], query_pos=query_embed
- )
-
- output = self.transformer_self_attention_layers[i](
- output, tgt_mask=None,
- tgt_key_padding_mask=None,
- query_pos=query_embed
- )
-
- # FFN
- output = self.transformer_ffn_layers[i](
- output
- )
-
- outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels])
- predictions_class.append(outputs_class)
- predictions_mask.append(outputs_mask)
-
- assert len(predictions_class) == self.num_layers + 1
-
- out = {
- 'pred_logits': predictions_class[-1],
- 'pred_masks': predictions_mask[-1],
- 'aux_outputs': self._set_aux_loss(
- predictions_class if self.mask_classification else None, predictions_mask
- )
- }
- return out
-
- def forward_prediction_heads(self, output, mask_features, attn_mask_target_size):
- decoder_output = self.decoder_norm(output)
- decoder_output = decoder_output.transpose(0, 1)
- outputs_class = self.class_embed(decoder_output)
- mask_embed = self.mask_embed(decoder_output)
- outputs_mask = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features)
-
- # NOTE: prediction is of higher-resolution
- # [B, Q, H, W] -> [B, Q, H*W] -> [B, h, Q, H*W] -> [B*h, Q, HW]
- attn_mask = F.interpolate(outputs_mask, size=attn_mask_target_size, mode="bilinear", align_corners=False)
- # must use bool type
- # If a BoolTensor is provided, positions with ``True`` are not allowed to attend while ``False`` values will be unchanged.
- attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool()
- attn_mask = attn_mask.detach()
-
- return outputs_class, outputs_mask, attn_mask
-
- @torch.jit.unused
- def _set_aux_loss(self, outputs_class, outputs_seg_masks):
- # this is a workaround to make torchscript happy, as torchscript
- # doesn't support dictionary with non-homogeneous values, such
- # as a dict having both a Tensor and a list.
- if self.mask_classification:
- return [
- {"pred_logits": a, "pred_masks": b}
- for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1])
- ]
- else:
- return [{"pred_masks": b} for b in outputs_seg_masks[:-1]]
diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/models/pytorch_modules.py b/spaces/akhaliq/Music_Source_Separation/bytesep/models/pytorch_modules.py
deleted file mode 100644
index 0bc51f0945d2764b8428611a8ecf109a0b344884..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Music_Source_Separation/bytesep/models/pytorch_modules.py
+++ /dev/null
@@ -1,204 +0,0 @@
-from typing import List, NoReturn
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def init_embedding(layer: nn.Module) -> NoReturn:
- r"""Initialize a Linear or Convolutional layer."""
- nn.init.uniform_(layer.weight, -1.0, 1.0)
-
- if hasattr(layer, 'bias'):
- if layer.bias is not None:
- layer.bias.data.fill_(0.0)
-
-
-def init_layer(layer: nn.Module) -> NoReturn:
- r"""Initialize a Linear or Convolutional layer."""
- nn.init.xavier_uniform_(layer.weight)
-
- if hasattr(layer, "bias"):
- if layer.bias is not None:
- layer.bias.data.fill_(0.0)
-
-
-def init_bn(bn: nn.Module) -> NoReturn:
- r"""Initialize a Batchnorm layer."""
- bn.bias.data.fill_(0.0)
- bn.weight.data.fill_(1.0)
- bn.running_mean.data.fill_(0.0)
- bn.running_var.data.fill_(1.0)
-
-
-def act(x: torch.Tensor, activation: str) -> torch.Tensor:
-
- if activation == "relu":
- return F.relu_(x)
-
- elif activation == "leaky_relu":
- return F.leaky_relu_(x, negative_slope=0.01)
-
- elif activation == "swish":
- return x * torch.sigmoid(x)
-
- else:
- raise Exception("Incorrect activation!")
-
-
-class Base:
- def __init__(self):
- r"""Base function for extracting spectrogram, cos, and sin, etc."""
- pass
-
- def spectrogram(self, input: torch.Tensor, eps: float = 0.0) -> torch.Tensor:
- r"""Calculate spectrogram.
-
- Args:
- input: (batch_size, segments_num)
- eps: float
-
- Returns:
- spectrogram: (batch_size, time_steps, freq_bins)
- """
- (real, imag) = self.stft(input)
- return torch.clamp(real ** 2 + imag ** 2, eps, np.inf) ** 0.5
-
- def spectrogram_phase(
- self, input: torch.Tensor, eps: float = 0.0
- ) -> List[torch.Tensor]:
- r"""Calculate the magnitude, cos, and sin of the STFT of input.
-
- Args:
- input: (batch_size, segments_num)
- eps: float
-
- Returns:
- mag: (batch_size, time_steps, freq_bins)
- cos: (batch_size, time_steps, freq_bins)
- sin: (batch_size, time_steps, freq_bins)
- """
- (real, imag) = self.stft(input)
- mag = torch.clamp(real ** 2 + imag ** 2, eps, np.inf) ** 0.5
- cos = real / mag
- sin = imag / mag
- return mag, cos, sin
-
- def wav_to_spectrogram_phase(
- self, input: torch.Tensor, eps: float = 1e-10
- ) -> List[torch.Tensor]:
- r"""Convert waveforms to magnitude, cos, and sin of STFT.
-
- Args:
- input: (batch_size, channels_num, segment_samples)
- eps: float
-
- Outputs:
- mag: (batch_size, channels_num, time_steps, freq_bins)
- cos: (batch_size, channels_num, time_steps, freq_bins)
- sin: (batch_size, channels_num, time_steps, freq_bins)
- """
- batch_size, channels_num, segment_samples = input.shape
-
- # Reshape input with shapes of (n, segments_num) to meet the
- # requirements of the stft function.
- x = input.reshape(batch_size * channels_num, segment_samples)
-
- mag, cos, sin = self.spectrogram_phase(x, eps=eps)
- # mag, cos, sin: (batch_size * channels_num, 1, time_steps, freq_bins)
-
- _, _, time_steps, freq_bins = mag.shape
- mag = mag.reshape(batch_size, channels_num, time_steps, freq_bins)
- cos = cos.reshape(batch_size, channels_num, time_steps, freq_bins)
- sin = sin.reshape(batch_size, channels_num, time_steps, freq_bins)
-
- return mag, cos, sin
-
- def wav_to_spectrogram(
- self, input: torch.Tensor, eps: float = 1e-10
- ) -> List[torch.Tensor]:
-
- mag, cos, sin = self.wav_to_spectrogram_phase(input, eps)
- return mag
-
-
-class Subband:
- def __init__(self, subbands_num: int):
- r"""Warning!! This class is not used!!
-
- This class does not work as well as [1], which splits subbands in the
- time domain. Please refer to [1] for a formal implementation.
-
- [1] Liu, Haohe, et al. "Channel-wise subband input for better voice and
- accompaniment separation on high resolution music." arXiv preprint arXiv:2008.05216 (2020).
-
- Args:
- subbands_num: int, e.g., 4
- """
- self.subbands_num = subbands_num
-
- def analysis(self, x: torch.Tensor) -> torch.Tensor:
- r"""Analysis time-frequency representation into subbands. Stack the
- subbands along the channel axis.
-
- Args:
- x: (batch_size, channels_num, time_steps, freq_bins)
-
- Returns:
- output: (batch_size, channels_num * subbands_num, time_steps, freq_bins // subbands_num)
- """
- batch_size, channels_num, time_steps, freq_bins = x.shape
-
- x = x.reshape(
- batch_size,
- channels_num,
- time_steps,
- self.subbands_num,
- freq_bins // self.subbands_num,
- )
- # x: (batch_size, channels_num, time_steps, subbands_num, freq_bins // subbands_num)
-
- x = x.transpose(2, 3)
-
- output = x.reshape(
- batch_size,
- channels_num * self.subbands_num,
- time_steps,
- freq_bins // self.subbands_num,
- )
- # output: (batch_size, channels_num * subbands_num, time_steps, freq_bins // subbands_num)
-
- return output
-
- def synthesis(self, x: torch.Tensor) -> torch.Tensor:
- r"""Synthesis subband time-frequency representations into original
- time-frequency representation.
-
- Args:
- x: (batch_size, channels_num * subbands_num, time_steps, freq_bins // subbands_num)
-
- Returns:
- output: (batch_size, channels_num, time_steps, freq_bins)
- """
- batch_size, subband_channels_num, time_steps, subband_freq_bins = x.shape
-
- channels_num = subband_channels_num // self.subbands_num
- freq_bins = subband_freq_bins * self.subbands_num
-
- x = x.reshape(
- batch_size,
- channels_num,
- self.subbands_num,
- time_steps,
- subband_freq_bins,
- )
- # x: (batch_size, channels_num, subbands_num, time_steps, freq_bins // subbands_num)
-
- x = x.transpose(2, 3)
- # x: (batch_size, channels_num, time_steps, subbands_num, freq_bins // subbands_num)
-
- output = x.reshape(batch_size, channels_num, time_steps, freq_bins)
- # x: (batch_size, channels_num, time_steps, freq_bins)
-
- return output
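Because `analysis` and `synthesis` are pure reshape/transpose operations, they are exact inverses of each other, which is easy to verify with a round trip on random data. A short sketch, assuming PyTorch is installed and the module is importable under the path shown in the diff header:

```python
import torch
from bytesep.models.pytorch_modules import Subband  # path taken from the repo layout

subband = Subband(subbands_num=4)
x = torch.randn(2, 2, 100, 256)   # (batch, channels, time_steps, freq_bins)

y = subband.analysis(x)           # (2, 8, 100, 64): channels * 4, freq_bins / 4
x_rec = subband.synthesis(y)      # back to (2, 2, 100, 256)

print(y.shape, x_rec.shape)
print(torch.equal(x, x_rec))      # True: the round trip is exact
```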
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/argparsers.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/argparsers.py
deleted file mode 100644
index 4a48adb050db49a0ba8f9e0e773818f568beba08..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/argparsers.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import argparse
-
-io_parser = argparse.ArgumentParser(add_help=False)
-io_parser.add_argument(
- "-i",
- "--input-files-dir",
- help="Path of the directory containing the files to be converted.",
- type=str,
- action="store",
- dest="input_dir",
- required=True,
-)
-io_parser.add_argument(
- "-o",
- "--output-files-dir",
- help="Path of the directory in which the converted files will be saved.",
- type=str,
- action="store",
- dest="output_dir",
- required=True,
-)
-
-ss_parser = argparse.ArgumentParser(add_help=False)
-ss_parser.add_argument(
- "-ss",
- "--split-sentences",
- help="ROUGE assumes one sentence per line as default summary format. Use "
- "this flag to split sentences using NLTK if the summary texts have "
- "another format.",
- action="store_true",
- dest="split_sents",
-)
-
-rouge_path_parser = argparse.ArgumentParser(add_help=False)
-rouge_path_parser.add_argument(
- "-hd",
- "--home-dir",
- help="Path of the directory containing ROUGE-1.5.5.pl.",
- type=str,
- action="store",
- dest="rouge_home",
- required=True,
-)
-
-model_sys_parser = argparse.ArgumentParser(add_help=False)
-model_sys_parser.add_argument(
- "-mfp",
- "--model-fn-pattern",
- help="Regexp matching model filenames.",
- type=str,
- action="store",
- dest="model_filename_pattern",
- required=True,
-)
-model_sys_parser.add_argument(
- "-sfp",
- "--system-fn-pattern",
- help="Regexp matching system filenames.",
- type=str,
- action="store",
- dest="system_filename_pattern",
- required=True,
-)
-model_sys_parser.add_argument(
- "-m",
- "--model-dir",
- help="Path of the directory containing model summaries.",
- type=str,
- action="store",
- dest="model_dir",
- required=True,
-)
-model_sys_parser.add_argument(
- "-s",
- "--system-dir",
- help="Path of the directory containing system summaries.",
- type=str,
- action="store",
- dest="system_dir",
- required=True,
-)
-model_sys_parser.add_argument(
- "-id",
- "--system-id",
- help="Optional system ID. This is useful when comparing several systems.",
- action="store",
- dest="system_id",
-)
-
-config_parser = argparse.ArgumentParser(add_help=False)
-config_parser.add_argument(
- "-c",
- "--config-file-path",
- help="Path of configfile to be written, including file name.",
- type=str,
- action="store",
- dest="config_file_path",
- required=True,
-)
-
-main_parser = argparse.ArgumentParser(parents=[model_sys_parser], add_help=False)
-main_parser.add_argument(
- "-hd",
- "--home-dir",
- help="Path of the directory containing ROUGE-1.5.5.pl.",
- type=str,
- action="store",
- dest="rouge_home",
-)
-main_parser.add_argument(
- "-rargs",
- "--rouge-args",
- help="Override pyrouge default ROUGE command line options with the "
- "ROUGE_ARGS string, enclosed in qoutation marks.",
- type=str,
- action="store",
- dest="rouge_args",
-)
diff --git a/spaces/akhaliq/deeplab2/model/layers/blocks.py b/spaces/akhaliq/deeplab2/model/layers/blocks.py
deleted file mode 100644
index 3e46651aeaacf1e416ffa19b43de433f2031cc31..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/model/layers/blocks.py
+++ /dev/null
@@ -1,211 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Implements building blocks for neural networks."""
-from typing import Optional
-
-from absl import logging
-
-import tensorflow as tf
-
-from deeplab2.model import utils
-from deeplab2.model.layers import convolutions
-from deeplab2.model.layers import squeeze_and_excite
-
-backend = tf.keras.backend
-layers = tf.keras.layers
-
-
-class InvertedBottleneckBlock(tf.keras.layers.Layer):
- """An inverted bottleneck block.
-
- Reference:
- Sandler, M., Howard, A., et al. Mobilenetv2: Inverted residuals and linear
- bottlenecks. In CVPR, 2018
- Howard, A., Sandler, M., et al. Searching for mobilenetv3. In ICCV, 2019
- """
-
- def __init__(self,
- in_filters: int,
- out_filters: int,
- expand_ratio: int,
- strides: int,
- kernel_size: int = 3,
- se_ratio: Optional[float] = None,
- activation: str = 'relu',
- se_inner_activation: str = 'relu',
- se_gating_activation: str = 'sigmoid',
- depthwise_activation: Optional[str] = None,
- expand_se_in_filters: bool = False,
- atrous_rate: int = 1,
- divisible_by: int = 1,
- bn_layer: layers.Layer = tf.keras.layers.BatchNormalization,
- conv_kernel_weight_decay: float = 0.0,
- regularize_depthwise: bool = False,
- use_depthwise: bool = True,
- use_residual: bool = True,
- name: Optional[str] = None):
- """Initializes an inverted bottleneck block with BN after convolutions.
-
- Args:
- in_filters: The number of filters of the input tensor.
- out_filters: The number of filters of the output tensor.
- expand_ratio: The expand_ratio for an inverted bottleneck block. If
- expand_ratio is <= 1, this argument will be ignored.
- strides: The number of stride. If greater than 1, this block will
- ultimately downsample the input.
- kernel_size: The kernel size of the depthwise conv layer.
- se_ratio: If not None, se ratio for the squeeze and excitation layer.
- activation: The name of the activation function.
- se_inner_activation: The name of squeeze-excitation inner activation.
- se_gating_activation: The name of squeeze-excitation gating activation.
- depthwise_activation: The name of the activation function for depthwise
- only.
- expand_se_in_filters: Whether or not to expand in_filter in squeeze and
- excitation layer.
- atrous_rate: The atrous dilation rate to use for the depthwise convolution.
- divisible_by: A number that all inner dimensions are divisible by.
- bn_layer: An optional tf.keras.layers.Layer that computes the
- normalization (default: tf.keras.layers.BatchNormalization).
- conv_kernel_weight_decay: The weight decay for convolution kernels.
- regularize_depthwise: Whether or not apply regularization on depthwise.
- use_depthwise: Whether to use a depthwise separable convolution; if False, a standard convolution is used instead.
- use_residual: Whether to include residual connection between input and
- output.
- name: Name for the block.
- """
- super(InvertedBottleneckBlock, self).__init__(name=name)
-
- self._in_filters = in_filters
- self._out_filters = out_filters
- self._expand_ratio = expand_ratio
- self._strides = strides
- self._kernel_size = kernel_size
- self._se_ratio = se_ratio
- self._divisible_by = divisible_by
- self._atrous_rate = atrous_rate
- self._regularize_depthwise = regularize_depthwise
- self._use_depthwise = use_depthwise
- self._use_residual = use_residual
- self._activation = activation
- self._se_inner_activation = se_inner_activation
- self._se_gating_activation = se_gating_activation
- self._depthwise_activation = depthwise_activation
- self._expand_se_in_filters = expand_se_in_filters
-
- if tf.keras.backend.image_data_format() == 'channels_last':
- self._bn_axis = -1
- else:
- self._bn_axis = 1
-
- if depthwise_activation is None:
- self._depthwise_activation = activation
-
- if regularize_depthwise:
- depthwise_kernel_weight_decay = conv_kernel_weight_decay
- else:
- depthwise_kernel_weight_decay = 0.0
-
- if self._expand_ratio <= 1 and not self._use_depthwise:
- raise ValueError(
- 'Undefined behavior if expand_ratio <= 1 and not use_depthwise')
-
- expand_filters = self._in_filters
- if self._expand_ratio > 1:
- # First 1x1 conv for channel expansion.
- expand_filters = utils.make_divisible(
- self._in_filters * self._expand_ratio, self._divisible_by)
-
- expand_kernel = 1 if self._use_depthwise else self._kernel_size
- expand_stride = 1 if self._use_depthwise else self._strides
-
- self._conv1_bn_act = convolutions.Conv2DSame(
- output_channels=expand_filters,
- kernel_size=expand_kernel,
- strides=expand_stride,
- atrous_rate=1,
- use_bias=False,
- use_bn=True,
- bn_layer=bn_layer,
- activation=self._activation,
- conv_kernel_weight_decay=conv_kernel_weight_decay,
- name='expand_conv')
-
- if self._use_depthwise:
- # Depthwise conv.
- self._conv2_bn_act = convolutions.DepthwiseConv2DSame(
- kernel_size=self._kernel_size,
- strides=self._strides,
- atrous_rate=self._atrous_rate,
- use_bias=False,
- use_bn=True,
- bn_layer=bn_layer,
- activation=self._depthwise_activation,
- name='depthwise_conv')
-
- # Squeeze and excitation.
- if self._se_ratio is not None and self._se_ratio > 0:
- if self._expand_se_in_filters:
- in_filters = expand_filters
- else:
- in_filters = self._in_filters
- self._squeeze_excitation = squeeze_and_excite.SqueezeAndExcite(
- in_filters=in_filters,
- out_filters=expand_filters,
- se_ratio=self._se_ratio,
- divisible_by=self._divisible_by,
- kernel_initializer='he_normal',
- kernel_regularizer=tf.keras.regularizers.l2(conv_kernel_weight_decay),
- activation=self._se_inner_activation,
- gating_activation=self._se_gating_activation,
- name=name + '_se')
- else:
- logging.info(
- 'Squeeze and Excitation is skipped due to undefined se_ratio')
- self._squeeze_excitation = None
-
- # Last 1x1 conv.
- self._conv3_bn = convolutions.Conv2DSame(
- output_channels=self._out_filters,
- kernel_size=1,
- strides=1,
- atrous_rate=1,
- use_bias=False,
- use_bn=True,
- bn_layer=bn_layer,
- activation=None,
- conv_kernel_weight_decay=conv_kernel_weight_decay,
- name='project_conv')
-
- def call(self, inputs, training=None):
- shortcut = inputs
- if self._expand_ratio > 1:
- x = self._conv1_bn_act(inputs, training=training)
- else:
- x = inputs
-
- if self._use_depthwise:
- x = self._conv2_bn_act(x, training=training)
-
- if self._squeeze_excitation is not None:
- x = self._squeeze_excitation(x)
-
- x = self._conv3_bn(x, training=training)
-
- if (self._use_residual and
- self._in_filters == self._out_filters):
- x = tf.add(x, shortcut)
-
- return x
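A short sketch of exercising the block on dummy input, assuming TensorFlow and the deeplab2 package are installed; the filter counts, spatial size, and layer name are illustrative:

```python
import tensorflow as tf
from deeplab2.model.layers.blocks import InvertedBottleneckBlock

block = InvertedBottleneckBlock(
    in_filters=32,
    out_filters=32,       # equal to in_filters so the residual connection is applied
    expand_ratio=4,
    strides=1,
    kernel_size=3,
    se_ratio=0.25,
    name='ibn_block')

x = tf.random.normal([2, 65, 65, 32])   # NHWC dummy feature map
y = block(x, training=False)
print(y.shape)                           # (2, 65, 65, 32)
```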
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/wheel.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/wheel.py
deleted file mode 100644
index d5b20dc9f9e90fb8e6a4863f3b143161e79e8e80..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/wheel.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import logging
-import os
-import shutil
-from optparse import Values
-from typing import List
-
-from pip._internal.cache import WheelCache
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.req_command import RequirementCommand, with_cleanup
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.exceptions import CommandError
-from pip._internal.req.req_install import InstallRequirement
-from pip._internal.req.req_tracker import get_requirement_tracker
-from pip._internal.utils.misc import ensure_dir, normalize_path
-from pip._internal.utils.temp_dir import TempDirectory
-from pip._internal.wheel_builder import build, should_build_for_wheel_command
-
-logger = logging.getLogger(__name__)
-
-
-class WheelCommand(RequirementCommand):
- """
- Build Wheel archives for your requirements and dependencies.
-
- Wheel is a built-package format, and offers the advantage of not
- recompiling your software during every install. For more details, see the
- wheel docs: https://wheel.readthedocs.io/en/latest/
-
- Requirements: setuptools>=0.8, and wheel.
-
- 'pip wheel' uses the bdist_wheel setuptools extension from the wheel
- package to build individual wheels.
-
- """
-
- usage = """
- %prog [options] <requirement specifier> ...
- %prog [options] -r <requirements file> ...
- %prog [options] [-e] <vcs project url> ...
- %prog [options] [-e] <local project path> ...
- %prog [options] <archive url/path> ..."""
-
- def add_options(self) -> None:
-
- self.cmd_opts.add_option(
- "-w",
- "--wheel-dir",
- dest="wheel_dir",
- metavar="dir",
- default=os.curdir,
- help=(
- "Build wheels into , where the default is the "
- "current working directory."
- ),
- )
- self.cmd_opts.add_option(cmdoptions.no_binary())
- self.cmd_opts.add_option(cmdoptions.only_binary())
- self.cmd_opts.add_option(cmdoptions.prefer_binary())
- self.cmd_opts.add_option(cmdoptions.no_build_isolation())
- self.cmd_opts.add_option(cmdoptions.use_pep517())
- self.cmd_opts.add_option(cmdoptions.no_use_pep517())
- self.cmd_opts.add_option(cmdoptions.constraints())
- self.cmd_opts.add_option(cmdoptions.editable())
- self.cmd_opts.add_option(cmdoptions.requirements())
- self.cmd_opts.add_option(cmdoptions.src())
- self.cmd_opts.add_option(cmdoptions.ignore_requires_python())
- self.cmd_opts.add_option(cmdoptions.no_deps())
- self.cmd_opts.add_option(cmdoptions.progress_bar())
-
- self.cmd_opts.add_option(
- "--no-verify",
- dest="no_verify",
- action="store_true",
- default=False,
- help="Don't verify if built wheel is valid.",
- )
-
- self.cmd_opts.add_option(cmdoptions.build_options())
- self.cmd_opts.add_option(cmdoptions.global_options())
-
- self.cmd_opts.add_option(
- "--pre",
- action="store_true",
- default=False,
- help=(
- "Include pre-release and development versions. By default, "
- "pip only finds stable versions."
- ),
- )
-
- self.cmd_opts.add_option(cmdoptions.require_hashes())
-
- index_opts = cmdoptions.make_option_group(
- cmdoptions.index_group,
- self.parser,
- )
-
- self.parser.insert_option_group(0, index_opts)
- self.parser.insert_option_group(0, self.cmd_opts)
-
- @with_cleanup
- def run(self, options: Values, args: List[str]) -> int:
- cmdoptions.check_install_build_global(options)
-
- session = self.get_default_session(options)
-
- finder = self._build_package_finder(options, session)
- wheel_cache = WheelCache(options.cache_dir, options.format_control)
-
- options.wheel_dir = normalize_path(options.wheel_dir)
- ensure_dir(options.wheel_dir)
-
- req_tracker = self.enter_context(get_requirement_tracker())
-
- directory = TempDirectory(
- delete=not options.no_clean,
- kind="wheel",
- globally_managed=True,
- )
-
- reqs = self.get_requirements(args, options, finder, session)
-
- preparer = self.make_requirement_preparer(
- temp_build_dir=directory,
- options=options,
- req_tracker=req_tracker,
- session=session,
- finder=finder,
- download_dir=options.wheel_dir,
- use_user_site=False,
- verbosity=self.verbosity,
- )
-
- resolver = self.make_resolver(
- preparer=preparer,
- finder=finder,
- options=options,
- wheel_cache=wheel_cache,
- ignore_requires_python=options.ignore_requires_python,
- use_pep517=options.use_pep517,
- )
-
- self.trace_basic_info(finder)
-
- requirement_set = resolver.resolve(reqs, check_supported_wheels=True)
-
- reqs_to_build: List[InstallRequirement] = []
- for req in requirement_set.requirements.values():
- if req.is_wheel:
- preparer.save_linked_requirement(req)
- elif should_build_for_wheel_command(req):
- reqs_to_build.append(req)
-
- # build wheels
- build_successes, build_failures = build(
- reqs_to_build,
- wheel_cache=wheel_cache,
- verify=(not options.no_verify),
- build_options=options.build_options or [],
- global_options=options.global_options or [],
- )
- for req in build_successes:
- assert req.link and req.link.is_wheel
- assert req.local_file_path
- # copy from cache to target directory
- try:
- shutil.copy(req.local_file_path, options.wheel_dir)
- except OSError as e:
- logger.warning(
- "Building wheel for %s failed: %s",
- req.name,
- e,
- )
- build_failures.append(req)
- if len(build_failures) != 0:
- raise CommandError("Failed to build one or more wheels")
-
- return SUCCESS
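This command is normally driven from the command line rather than imported as a module. A minimal sketch of invoking it programmatically through a subprocess; the requirements file name and output directory are illustrative:

```python
import subprocess
import sys

# Equivalent to running: pip wheel --wheel-dir wheelhouse -r requirements.txt
subprocess.run(
    [sys.executable, "-m", "pip", "wheel",
     "--wheel-dir", "wheelhouse",
     "-r", "requirements.txt"],
    check=True,
)
```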
diff --git a/spaces/algomuffin/jojo_fork/e4e/models/stylegan2/op/fused_act.py b/spaces/algomuffin/jojo_fork/e4e/models/stylegan2/op/fused_act.py
deleted file mode 100644
index 973a84fffde53668d31397da5fb993bbc95f7be0..0000000000000000000000000000000000000000
--- a/spaces/algomuffin/jojo_fork/e4e/models/stylegan2/op/fused_act.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import os
-
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.utils.cpp_extension import load
-
-module_path = os.path.dirname(__file__)
-fused = load(
- 'fused',
- sources=[
- os.path.join(module_path, 'fused_bias_act.cpp'),
- os.path.join(module_path, 'fused_bias_act_kernel.cu'),
- ],
-)
-
-
-class FusedLeakyReLUFunctionBackward(Function):
- @staticmethod
- def forward(ctx, grad_output, out, negative_slope, scale):
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- empty = grad_output.new_empty(0)
-
- grad_input = fused.fused_bias_act(
- grad_output, empty, out, 3, 1, negative_slope, scale
- )
-
- dim = [0]
-
- if grad_input.ndim > 2:
- dim += list(range(2, grad_input.ndim))
-
- grad_bias = grad_input.sum(dim).detach()
-
- return grad_input, grad_bias
-
- @staticmethod
- def backward(ctx, gradgrad_input, gradgrad_bias):
- out, = ctx.saved_tensors
- gradgrad_out = fused.fused_bias_act(
- gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale
- )
-
- return gradgrad_out, None, None, None
-
-
-class FusedLeakyReLUFunction(Function):
- @staticmethod
- def forward(ctx, input, bias, negative_slope, scale):
- empty = input.new_empty(0)
- out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- out, = ctx.saved_tensors
-
- grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
- grad_output, out, ctx.negative_slope, ctx.scale
- )
-
- return grad_input, grad_bias, None, None
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
- return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
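In the forward pass, the fused CUDA op above is equivalent to adding the per-channel bias, applying leaky ReLU, and rescaling. A pure-PyTorch sketch of that reference behaviour, useful on machines without the compiled extension; the tensor sizes are illustrative and the function name is ours, not part of the module:

```python
import torch
import torch.nn.functional as F

def fused_leaky_relu_reference(input, bias, negative_slope=0.2, scale=2 ** 0.5):
    # Broadcast the per-channel bias over the remaining dimensions.
    rest_dim = [1] * (input.ndim - bias.ndim - 1)
    return F.leaky_relu(
        input + bias.view(1, bias.shape[0], *rest_dim), negative_slope
    ) * scale

x = torch.randn(4, 512, 8, 8)
b = torch.zeros(512)
print(fused_leaky_relu_reference(x, b).shape)  # torch.Size([4, 512, 8, 8])
```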
diff --git a/spaces/aliabid94/AutoGPT/tests/test_prompt_generator.py b/spaces/aliabid94/AutoGPT/tests/test_prompt_generator.py
deleted file mode 100644
index 6a0bfd6c7bbdbfaa3750e9dee621bd25e17a448b..0000000000000000000000000000000000000000
--- a/spaces/aliabid94/AutoGPT/tests/test_prompt_generator.py
+++ /dev/null
@@ -1,114 +0,0 @@
-from unittest import TestCase
-
-from autogpt.promptgenerator import PromptGenerator
-
-
-class TestPromptGenerator(TestCase):
- """
- Test cases for the PromptGenerator class, which is responsible for generating
- prompts for the AI with constraints, commands, resources, and performance evaluations.
- """
-
- @classmethod
- def setUpClass(cls):
- """
- Set up the initial state for each test method by creating an instance of PromptGenerator.
- """
- cls.generator = PromptGenerator()
-
- # Test whether the add_constraint() method adds a constraint to the generator's constraints list
- def test_add_constraint(self):
- """
- Test if the add_constraint() method adds a constraint to the generator's constraints list.
- """
- constraint = "Constraint1"
- self.generator.add_constraint(constraint)
- self.assertIn(constraint, self.generator.constraints)
-
- # Test whether the add_command() method adds a command to the generator's commands list
- def test_add_command(self):
- """
- Test if the add_command() method adds a command to the generator's commands list.
- """
- command_label = "Command Label"
- command_name = "command_name"
- args = {"arg1": "value1", "arg2": "value2"}
- self.generator.add_command(command_label, command_name, args)
- command = {
- "label": command_label,
- "name": command_name,
- "args": args,
- }
- self.assertIn(command, self.generator.commands)
-
- def test_add_resource(self):
- """
- Test if the add_resource() method adds a resource to the generator's resources list.
- """
- resource = "Resource1"
- self.generator.add_resource(resource)
- self.assertIn(resource, self.generator.resources)
-
- def test_add_performance_evaluation(self):
- """
- Test if the add_performance_evaluation() method adds an evaluation to the generator's
- performance_evaluation list.
- """
- evaluation = "Evaluation1"
- self.generator.add_performance_evaluation(evaluation)
- self.assertIn(evaluation, self.generator.performance_evaluation)
-
- def test_generate_prompt_string(self):
- """
- Test if the generate_prompt_string() method generates a prompt string with all the added
- constraints, commands, resources, and evaluations.
- """
- # Define the test data
- constraints = ["Constraint1", "Constraint2"]
- commands = [
- {
- "label": "Command1",
- "name": "command_name1",
- "args": {"arg1": "value1"},
- },
- {
- "label": "Command2",
- "name": "command_name2",
- "args": {},
- },
- ]
- resources = ["Resource1", "Resource2"]
- evaluations = ["Evaluation1", "Evaluation2"]
-
- # Add test data to the generator
- for constraint in constraints:
- self.generator.add_constraint(constraint)
- for command in commands:
- self.generator.add_command(
- command["label"], command["name"], command["args"]
- )
- for resource in resources:
- self.generator.add_resource(resource)
- for evaluation in evaluations:
- self.generator.add_performance_evaluation(evaluation)
-
- # Generate the prompt string and verify its correctness
- prompt_string = self.generator.generate_prompt_string()
- self.assertIsNotNone(prompt_string)
-
- # Check if all constraints, commands, resources, and evaluations are present in the prompt string
- for constraint in constraints:
- self.assertIn(constraint, prompt_string)
- for command in commands:
- self.assertIn(command["name"], prompt_string)
- for key, value in command["args"].items():
- self.assertIn(f'"{key}": "{value}"', prompt_string)
- for resource in resources:
- self.assertIn(resource, prompt_string)
- for evaluation in evaluations:
- self.assertIn(evaluation, prompt_string)
-
- self.assertIn("constraints", prompt_string.lower())
- self.assertIn("commands", prompt_string.lower())
- self.assertIn("resources", prompt_string.lower())
- self.assertIn("performance evaluation", prompt_string.lower())
diff --git a/spaces/allknowingroger/Image-Models-Test117/README.md b/spaces/allknowingroger/Image-Models-Test117/README.md
deleted file mode 100644
index 782bb897f15150d3687de591394b7525b4198202..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test117/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-duplicated_from: allknowingroger/Image-Models-Test116
----
-
-
\ No newline at end of file
diff --git a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_display_1v1.py b/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_display_1v1.py
deleted file mode 100644
index 93fadd14403ca611b0d1355c0cb34526586e04ec..0000000000000000000000000000000000000000
--- a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_display_1v1.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import numpy as np
-import h5py
-import os
-
-import mercury as mr
-
-import sys
-sys.path.append('/plot_scripts/')
-from map_packages_colors_1v1 import *
-from plot_scripts_1v1 import *
-
-# print(" Tasks to choose from : ")
-# print(" Heisenberg dynamics (hdyn), Random Quantum Circuits (rqc), Quantum Fourier Transform (qft)")
-# print("###############################")
-# print(" Package list to choose from :")
-# print(" cirq, hybridq, intel_qs_cpp, pennylane_l, projectq, qcgpu, qibojit, qrack_sch, qsimcirq, quest, svsim, yao, hiq, pennylane, qibo, qiskit, qulacs")
-# print("###############################")
-# print(" Compute capability choices for packages :")
-# print(" singlethread, multithread, gpu")
-# print("###############################")
-# print(" Precision choices for different compute capabilities :")
-# print(" sp (single precision), dp (double precision)")
-# print("###############################")
-
-# task_1 = input(" Enter the task for the first package : ")
-# package_1 = input(" Enter the choice of the package, package 1 : ")
-# p1_com_cap = input(" Enter the choice of the compute capability for package 1 : ")
-# p1_prec = input(" Enter the choice of the precision for package 1 : ")
-
-# task_2 = input("Enter the task for the second package : ")
-# package_2 = input(" Enter the choice of the package, package 2 : ")
-# p2_com_cap = input(" Enter the choice of the compute capability for package 2 : ")
-# p2_prec = input(" Enter the choice of the precision for package 2 : ")
-
-def abs_time(t1, p1, p1_cc, p1_pr, t2, p2, p2_cc, p2_pr, N_end):
-
- if t1 == "Heisenberg dynamics":
- t1 = "hdyn"
- elif t1 == "Random Quantum Circuit":
- t1 = "rqc"
- elif t1 == "Quantum Fourier Transform":
- t1 = "qft"
-
- if p1_cc == "Singlethread":
- p1_cc = "singlethread"
- elif p1_cc == "Multithread":
- p1_cc = "multithread"
- elif p1_cc == "GPU":
- p1_cc = "gpu"
-
- if p1_pr == "Single":
- p1_pr = "sp"
- elif p1_pr == "Double":
- p1_pr = "dp"
-
- if t2 == "Heisenberg dynamics":
- t2 = "hdyn"
- elif t2 == "Random Quantum Circuit":
- t2 = "rqc"
- elif t2 == "Quantum Fourier Transform":
- t2 = "qft"
-
- if p2_cc == "Singlethread":
- p2_cc = "singlethread"
- elif p2_cc == "Multithread":
- p2_cc = "multithread"
- elif p2_cc == "GPU":
- p2_cc = "gpu"
-
- if p2_pr == "Single":
- p2_pr = "sp"
- elif p2_pr == "Double":
- p2_pr = "dp"
-
- if t1 == 'hdyn' or t1 == 'qft':
- N_arr_t1 = np.arange(6, N_end, 2)
- elif t1 == 'rqc':
- N_arr_t1 = np.arange(12, N_end, 2)
-
- if t2 == 'hdyn' or t2 == 'qft':
- N_arr_t2 = np.arange(6, N_end, 2)
- elif t2 == 'rqc':
- N_arr_t2 = np.arange(12, N_end, 2)
-
- dir = os.getcwd()
- data_file_p1 = dir + '/data/{}/{}_{}_{}.h5'.format(t1, p1, p1_cc, p1_pr)
- data_file_p2 = dir + '/data/{}/{}_{}_{}.h5'.format(t2, p2, p2_cc, p2_pr)
-
- fig, ax = plt.subplots()
-
- mr.Md(f"TtS performance of the selected options")
-
- if os.path.isfile(data_file_p1) and os.path.isfile(data_file_p2):
- h5f_1 = h5py.File(data_file_p1, 'r')
- dat_1 = h5f_1[storage_dict[p1]][:]
- h5f_1.close()
-
- h5f_2 = h5py.File(data_file_p2, 'r')
- dat_2 = h5f_2[storage_dict[p2]][:]
- h5f_2.close()
-
- plot_abs_data_n_arr(N_arr_t1, dat_1, p1+'_'+t1+'_'+p1_cc+'_'+p1_pr)
- plot_abs_data_n_arr(N_arr_t2, dat_2, p2+'_'+t2+'_'+p2_cc+'_'+p2_pr)
- # save_flag = input("Do you want to save the plot?")
- # if save_flag == "Y":
- # gen_settings(fig, ax, r"N (system size)", r"Time ($t_{package}$)", False, True, True, 10**-1, 10**5, "out", "perf_{}_{}_{}_{}_{}_{}_{}_{}.pdf".format(t1, p1, p1_cc, p1_pr, t2, p2, p2_cc, p2_pr))
- # else:
- if N_arr_t1[0] > N_arr_t2[0]:
- N_arr = N_arr_t2
- else:
- N_arr = N_arr_t1
-
- gen_settings(fig, ax, r"N (system size)", r"Time ($t_{package}$)", False, True, True, N_arr[0]-2, N_arr[-1], True, 10**-1, 10**5, "out", None)
- else:
- mr.Md(f" Re-select the options as the requested configuration is not supported (check the table in the index page for supported configurations)")
-
-# abs_time(task_1, package_1, p1_com_cap, p1_prec, task_2, package_2, p2_com_cap, p2_prec)
-
-def relative_time_wrt_pack(t1, p1, p1_cc, p1_pr, t2, p2, p2_cc, p2_pr, N_end):
-
- mr.Md("___")
- mr.Md(f"Relative performance")
-
- if t1 == "Heisenberg dynamics":
- t1 = "hdyn"
- elif t1 == "Random Quantum Circuit":
- t1 = "rqc"
- elif t1 == "Quantum Fourier Transform":
- t1 = "qft"
-
- if p1_cc == "Singlethread":
- p1_cc = "singlethread"
- elif p1_cc == "Multithread":
- p1_cc = "multithread"
- elif p1_cc == "GPU":
- p1_cc = "gpu"
-
- if p1_pr == "Single":
- p1_pr = "sp"
- elif p1_pr == "Double":
- p1_pr = "dp"
-
- if t2 == "Heisenberg dynamics":
- t2 = "hdyn"
- elif t2 == "Random Quantum Circuit":
- t2 = "rqc"
- elif t2 == "Quantum Fourier Transform":
- t2 = "qft"
-
- if p2_cc == "Singlethread":
- p2_cc = "singlethread"
- elif p2_cc == "Multithread":
- p2_cc = "multithread"
- elif p2_cc == "GPU":
- p2_cc = "gpu"
-
- if p2_pr == "Single":
- p2_pr = "sp"
- elif p2_pr == "Double":
- p2_pr = "dp"
-
- if t1 == 'hdyn' or t1 == 'qft':
- N_arr_t1 = np.arange(6, N_end, 2)
- elif t1 == 'rqc':
- N_arr_t1 = np.arange(12, N_end, 2)
-
- if t2 == 'hdyn' or t2 == 'qft':
- N_arr_t2 = np.arange(6, N_end, 2)
- elif t2 == 'rqc':
- N_arr_t2 = np.arange(12, N_end, 2)
-
- fig, ax = plt.subplots()
-
- dir = os.getcwd()
- data_file_p1 = dir + '/data/{}/{}_{}_{}.h5'.format(t1, p1, p1_cc, p1_pr)
- data_file_p2 = dir + '/data/{}/{}_{}_{}.h5'.format(t2, p2, p2_cc, p2_pr)
-
- if os.path.isfile(data_file_p1) and os.path.isfile(data_file_p2):
-
- h5f_1 = h5py.File(data_file_p1, 'r')
- dat_1 = h5f_1[storage_dict[p1]][:]
- h5f_1.close()
-
- h5f_2 = h5py.File(data_file_p2, 'r')
- dat_2 = h5f_2[storage_dict[p2]][:]
- h5f_2.close()
-
- if np.sum(dat_1) > np.sum(dat_2):
- if N_arr_t1[0] > N_arr_t2[0]:
- dat_2 = dat_2[3:]
- N_arr = N_arr_t1
- elif N_arr_t1[0] < N_arr_t2[0]:
- dat_1 = dat_1[3:]
- N_arr = N_arr_t2
- else:
- N_arr = N_arr_t1
- plot_comp_data_n_arr(N_arr, dat_1, dat_2, p1+'_'+t1+'_'+p1_cc+'_'+p1_pr)
- plot_comp_data_n_arr(N_arr, dat_2, dat_2, p2+'_'+t2+'_'+p2_cc+'_'+p2_pr)
- # save_flag = input("Do you want to save the plot?")
- # if save_flag == "Y":
- # gen_settings(fig, ax, r"N (system size)", r"Relative time - " + p2, False, True, True, 10**-1, 10**3, "out", "relative_perf_{}_{}_{}_{}_{}_{}_{}_{}.pdf".format(t1, p1, p1_cc, p1_pr, t2, p2, p2_cc, p2_pr))
- # else:
- gen_settings(fig, ax, r"N (system size)", r"Relative time", False, True, True, N_arr[0]-2, N_arr[-1], True, 10**-1, 10**3, "out", None)
- else:
- if N_arr_t1[0] > N_arr_t2[0]:
- dat_2 = dat_2[3:]
- N_arr = N_arr_t1
- elif N_arr_t1[0] < N_arr_t2[0]:
- dat_1 = dat_1[3:]
- N_arr = N_arr_t2
- else:
- N_arr = N_arr_t1
- plot_comp_data_n_arr(N_arr, dat_2, dat_1, p2+'_'+t2+'_'+p2_cc+'_'+p2_pr)
- plot_comp_data_n_arr(N_arr, dat_1, dat_1, p1+'_'+t1+'_'+p1_cc+'_'+p1_pr)
- # save_flag = input("Do you want to save the plot?")
- # if save_flag == "Y":
- # gen_settings(fig, ax, r"N (system size)", r"Relative time", False, True, True, 10**-1, 10**3, "out", "relative_perf_{}_{}_{}_{}_{}_{}_{}_{}.pdf".format(t1, p1, p1_cc, p1_pr, t2, p2, p2_cc, p2_pr))
- # else:
- gen_settings(fig, ax, r"N (system size)", r"Relative time", False, True, True, N_arr[0]-2, N_arr[-1], True, 10**-1, 10**3, "out", None)
-
-# relative_time_wrt_pack(task_1, package_1, p1_com_cap, p1_prec, task_2, package_2, p2_com_cap, p2_prec)
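The two entry points are meant to be called from the Mercury notebook with the human-readable labels handled by the if/elif branches above. A sketch of a direct call, assuming the corresponding HDF5 files exist under data/ and that both packages appear in storage_dict; the package and precision choices are illustrative:

```python
# Compare time-to-solution of two packages on Heisenberg dynamics up to N = 30 qubits.
abs_time("Heisenberg dynamics", "qiskit", "Singlethread", "Double",
         "Heisenberg dynamics", "qulacs", "Singlethread", "Double", 30)

relative_time_wrt_pack("Heisenberg dynamics", "qiskit", "Singlethread", "Double",
                       "Heisenberg dynamics", "qulacs", "Singlethread", "Double", 30)
```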
diff --git a/spaces/anurag629/botaniscan/app/chatbot/__init__.py b/spaces/anurag629/botaniscan/app/chatbot/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/dataset.py b/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/dataset.py
deleted file mode 100644
index 093efcd4fc70602a802a82bc50457d0a5d08b5e2..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/dataset.py
+++ /dev/null
@@ -1,246 +0,0 @@
-import os
-import numpy as np
-import PIL
-import torch
-from PIL import Image
-from torch.utils.data import Dataset, DataLoader, Sampler
-from torchvision import transforms
-from collections import defaultdict
-from random import shuffle, choices
-
-import random
-import tqdm
-from modules import devices, shared
-import re
-
-from ldm.modules.distributions.distributions import DiagonalGaussianDistribution
-
-re_numbers_at_start = re.compile(r"^[-\d]+\s*")
-
-
-class DatasetEntry:
- def __init__(self, filename=None, filename_text=None, latent_dist=None, latent_sample=None, cond=None, cond_text=None, pixel_values=None, weight=None):
- self.filename = filename
- self.filename_text = filename_text
- self.weight = weight
- self.latent_dist = latent_dist
- self.latent_sample = latent_sample
- self.cond = cond
- self.cond_text = cond_text
- self.pixel_values = pixel_values
-
-
-class PersonalizedBase(Dataset):
- def __init__(self, data_root, width, height, repeats, flip_p=0.5, placeholder_token="*", model=None, cond_model=None, device=None, template_file=None, include_cond=False, batch_size=1, gradient_step=1, shuffle_tags=False, tag_drop_out=0, latent_sampling_method='once', varsize=False, use_weight=False):
- re_word = re.compile(shared.opts.dataset_filename_word_regex) if len(shared.opts.dataset_filename_word_regex) > 0 else None
-
- self.placeholder_token = placeholder_token
-
- self.flip = transforms.RandomHorizontalFlip(p=flip_p)
-
- self.dataset = []
-
- with open(template_file, "r") as file:
- lines = [x.strip() for x in file.readlines()]
-
- self.lines = lines
-
- assert data_root, 'dataset directory not specified'
- assert os.path.isdir(data_root), "Dataset directory doesn't exist"
- assert os.listdir(data_root), "Dataset directory is empty"
-
- self.image_paths = [os.path.join(data_root, file_path) for file_path in os.listdir(data_root)]
-
- self.shuffle_tags = shuffle_tags
- self.tag_drop_out = tag_drop_out
- groups = defaultdict(list)
-
- print("Preparing dataset...")
- for path in tqdm.tqdm(self.image_paths):
- alpha_channel = None
- if shared.state.interrupted:
- raise Exception("interrupted")
- try:
- image = Image.open(path)
- #Currently does not work for single color transparency
- #We would need to read image.info['transparency'] for that
- if use_weight and 'A' in image.getbands():
- alpha_channel = image.getchannel('A')
- image = image.convert('RGB')
- if not varsize:
- image = image.resize((width, height), PIL.Image.BICUBIC)
- except Exception:
- continue
-
- text_filename = os.path.splitext(path)[0] + ".txt"
- filename = os.path.basename(path)
-
- if os.path.exists(text_filename):
- with open(text_filename, "r", encoding="utf8") as file:
- filename_text = file.read()
- else:
- filename_text = os.path.splitext(filename)[0]
- filename_text = re.sub(re_numbers_at_start, '', filename_text)
- if re_word:
- tokens = re_word.findall(filename_text)
- filename_text = (shared.opts.dataset_filename_join_string or "").join(tokens)
-
- npimage = np.array(image).astype(np.uint8)
- npimage = (npimage / 127.5 - 1.0).astype(np.float32)
-
- torchdata = torch.from_numpy(npimage).permute(2, 0, 1).to(device=device, dtype=torch.float32)
- latent_sample = None
-
- with devices.autocast():
- latent_dist = model.encode_first_stage(torchdata.unsqueeze(dim=0))
-
- #Perform latent sampling, even for random sampling.
- #We need the sample dimensions for the weights
- if latent_sampling_method == "deterministic":
- if isinstance(latent_dist, DiagonalGaussianDistribution):
- # Works only for DiagonalGaussianDistribution
- latent_dist.std = 0
- else:
- latent_sampling_method = "once"
- latent_sample = model.get_first_stage_encoding(latent_dist).squeeze().to(devices.cpu)
-
- if use_weight and alpha_channel is not None:
- channels, *latent_size = latent_sample.shape
- weight_img = alpha_channel.resize(latent_size)
- npweight = np.array(weight_img).astype(np.float32)
- #Repeat for every channel in the latent sample
- weight = torch.tensor([npweight] * channels).reshape([channels] + latent_size)
- #Normalize the weight to a minimum of 0 and a mean of 1, that way the loss will be comparable to default.
- weight -= weight.min()
- weight /= weight.mean()
- elif use_weight:
- #If an image does not have an alpha channel, add a ones weight map anyway so we can stack it later
- weight = torch.ones(latent_sample.shape)
- else:
- weight = None
-
- if latent_sampling_method == "random":
- entry = DatasetEntry(filename=path, filename_text=filename_text, latent_dist=latent_dist, weight=weight)
- else:
- entry = DatasetEntry(filename=path, filename_text=filename_text, latent_sample=latent_sample, weight=weight)
-
- if not (self.tag_drop_out != 0 or self.shuffle_tags):
- entry.cond_text = self.create_text(filename_text)
-
- if include_cond and not (self.tag_drop_out != 0 or self.shuffle_tags):
- with devices.autocast():
- entry.cond = cond_model([entry.cond_text]).to(devices.cpu).squeeze(0)
- groups[image.size].append(len(self.dataset))
- self.dataset.append(entry)
- del torchdata
- del latent_dist
- del latent_sample
- del weight
-
- self.length = len(self.dataset)
- self.groups = list(groups.values())
- assert self.length > 0, "No images have been found in the dataset."
- self.batch_size = min(batch_size, self.length)
- self.gradient_step = min(gradient_step, self.length // self.batch_size)
- self.latent_sampling_method = latent_sampling_method
-
- if len(groups) > 1:
- print("Buckets:")
- for (w, h), ids in sorted(groups.items(), key=lambda x: x[0]):
- print(f" {w}x{h}: {len(ids)}")
- print()
-
- def create_text(self, filename_text):
- text = random.choice(self.lines)
- tags = filename_text.split(',')
- if self.tag_drop_out != 0:
- tags = [t for t in tags if random.random() > self.tag_drop_out]
- if self.shuffle_tags:
- random.shuffle(tags)
- text = text.replace("[filewords]", ','.join(tags))
- text = text.replace("[name]", self.placeholder_token)
- return text
-
- def __len__(self):
- return self.length
-
- def __getitem__(self, i):
- entry = self.dataset[i]
- if self.tag_drop_out != 0 or self.shuffle_tags:
- entry.cond_text = self.create_text(entry.filename_text)
- if self.latent_sampling_method == "random":
- entry.latent_sample = shared.sd_model.get_first_stage_encoding(entry.latent_dist).to(devices.cpu)
- return entry
-
-
-class GroupedBatchSampler(Sampler):
- def __init__(self, data_source: PersonalizedBase, batch_size: int):
- super().__init__(data_source)
-
- n = len(data_source)
- self.groups = data_source.groups
- self.len = n_batch = n // batch_size
- expected = [len(g) / n * n_batch * batch_size for g in data_source.groups]
- self.base = [int(e) // batch_size for e in expected]
- self.n_rand_batches = nrb = n_batch - sum(self.base)
- self.probs = [e%batch_size/nrb/batch_size if nrb>0 else 0 for e in expected]
- self.batch_size = batch_size
-
- def __len__(self):
- return self.len
-
- def __iter__(self):
- b = self.batch_size
-
- for g in self.groups:
- shuffle(g)
-
- batches = []
- for g in self.groups:
- batches.extend(g[i*b:(i+1)*b] for i in range(len(g) // b))
- for _ in range(self.n_rand_batches):
- rand_group = choices(self.groups, self.probs)[0]
- batches.append(choices(rand_group, k=b))
-
- shuffle(batches)
-
- yield from batches
-
-
-class PersonalizedDataLoader(DataLoader):
- def __init__(self, dataset, latent_sampling_method="once", batch_size=1, pin_memory=False):
- super(PersonalizedDataLoader, self).__init__(dataset, batch_sampler=GroupedBatchSampler(dataset, batch_size), pin_memory=pin_memory)
- if latent_sampling_method == "random":
- self.collate_fn = collate_wrapper_random
- else:
- self.collate_fn = collate_wrapper
-
-
-class BatchLoader:
- def __init__(self, data):
- self.cond_text = [entry.cond_text for entry in data]
- self.cond = [entry.cond for entry in data]
- self.latent_sample = torch.stack([entry.latent_sample for entry in data]).squeeze(1)
- if all(entry.weight is not None for entry in data):
- self.weight = torch.stack([entry.weight for entry in data]).squeeze(1)
- else:
- self.weight = None
- #self.emb_index = [entry.emb_index for entry in data]
- #print(self.latent_sample.device)
-
- def pin_memory(self):
- self.latent_sample = self.latent_sample.pin_memory()
- return self
-
-def collate_wrapper(batch):
- return BatchLoader(batch)
-
-class BatchLoaderRandom(BatchLoader):
- def __init__(self, data):
- super().__init__(data)
-
- def pin_memory(self):
- return self
-
-def collate_wrapper_random(batch):
- return BatchLoaderRandom(batch)
\ No newline at end of file
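The per-pixel loss weighting above (alpha channel resized to the latent resolution, copied per latent channel, then normalized to a minimum of 0 and a mean of 1) can be reproduced in isolation. A sketch assuming only Pillow, NumPy, and PyTorch; the alpha gradient and latent shape are made up for illustration:

```python
import numpy as np
import torch
from PIL import Image

channels, latent_h, latent_w = 4, 64, 64                    # illustrative latent shape
alpha = np.tile(np.linspace(0, 255, 512, dtype=np.uint8), (512, 1))
alpha_channel = Image.fromarray(alpha, mode="L")            # stand-in for image.getchannel('A')

weight_img = alpha_channel.resize((latent_w, latent_h))
npweight = np.array(weight_img).astype(np.float32)

# One copy of the map per latent channel, then normalize: minimum 0, mean 1.
weight = torch.from_numpy(np.stack([npweight] * channels))
weight -= weight.min()
weight /= weight.mean()

print(weight.shape)                                 # torch.Size([4, 64, 64])
print(float(weight.min()), float(weight.mean()))    # ~0.0, ~1.0
```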
diff --git a/spaces/archietram/Predict_Age_and_BMI_from_Images/app.py b/spaces/archietram/Predict_Age_and_BMI_from_Images/app.py
deleted file mode 100644
index f58db78a70bb1236081900f1fb7bf1274d78270c..0000000000000000000000000000000000000000
--- a/spaces/archietram/Predict_Age_and_BMI_from_Images/app.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from fastai.vision.all import *
-import gradio as gr
-
-def get_bmi(): return _
-def get_age(): return _
-def combine_loss(): return _
-def age_loss(): return _
-def bmi_loss(): return _
-
-learn = load_learner("export.pkl")
-
-def classify_image(img):
- tst_dl = learn.dls.test_dl([img], num_workers = 0)
- preds,_ = learn.get_preds(dl=tst_dl)
- result_text = "This person is " + str(round(preds[0][0].item(), 0)) + " years old with a BMI of " + str(round(preds[0][1].item(), 1)) + " kg/m^2"
- return result_text
-
-image = gr.inputs.Image()
-examples = ['A00147.png','K86344.png','A00360.png', "R89614.png", 'A01681.png', 'R79591.png','R86556.png', "R43263.png", 'Y15554.png', "X78069.png"]
-title = 'Predict Age and Body Mass Index from a Picture'
-description = 'This app predicts the age and BMI of a person just from their face.'
-article = "Author: Archie Tram . "
-
-intf = gr.Interface(fn=classify_image, inputs=image, outputs="text", examples=examples, title=title, description=description, article=article)
-intf.launch(inline=False)
diff --git a/spaces/arnavkartikeya/SCRIPture-final/app.py b/spaces/arnavkartikeya/SCRIPture-final/app.py
deleted file mode 100644
index d14af69fbc6d4feafcf2cdb8c9ab73dcd96fb0bf..0000000000000000000000000000000000000000
--- a/spaces/arnavkartikeya/SCRIPture-final/app.py
+++ /dev/null
@@ -1,209 +0,0 @@
-from PIL import Image
-import requests
-import torch
-from torchvision import transforms
-import os
-from torchvision.transforms.functional import InterpolationMode
-import matplotlib.pyplot as plt
-import matplotlib.image as mpimg
-import cohere
-import base64
-import gradio as gr
-import string
-import openai
-
-
-def cap(t):
- indices = []
- tem = ""
- for j in range(len(t)):
- if t[j] == "." or t[j] == "!" or t[j] == "?":
- if j+2 < len(t):
- indices.append(j+2)
- for j in range(len(t)):
- if j in indices:
- tem += t[j].upper()
- else:
- tem += t[j]
- return tem
-def processing(s):
- # split the input into newline-delimited segments and process each one separately
- arr = []
- temp = ""
- fin = ""
- for i in range(len(s)):
- temp += s[i]
- if s[i] == "\n":
- arr.append(temp)
- temp = ""
- if i == len(s)-1:
- arr.append(temp)
- for i in arr:
- t = i
- t = t.strip()
- temp = ""
- #make the first element of the string be the first alpha character
- ind = 0
- for j in range(len(t)):
- if t[j].isalpha():
- ind = j
- break
- t = t[ind:]
- t = t.capitalize()
- # capitalize all words after punctuation
- t = cap(t)
- #remove some punctuation
- t = t.replace("(", "")
- t = t.replace(")", "")
- t = t.replace("&", "")
- t = t.replace("#", "")
- t = t.replace("_", "")
-
- #remove punctuation if it is not following an alpha character
- temp = ""
- for j in range(len(t)):
- if t[j] in string.punctuation:
- if t[j-1] not in string.punctuation:
- temp += t[j]
- else:
- temp += t[j]
- fin += temp + "\n"
- #find the last punctuation in fin and return everything before that
- ind = 0
- for i in range(len(fin)):
- if fin[i] == "." or fin[i] == "?" or fin[i] == "!":
- ind = i
- if(ind != 0 and ind != len(fin) - 1):
- return fin[:ind+1]
- else:
- return fin
-
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
-from models.blip import blip_decoder
-
-image_size = 384
-transform = transforms.Compose([
- transforms.Resize((image_size,image_size),interpolation=InterpolationMode.BICUBIC),
- transforms.ToTensor(),
- transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
- ])
-
-model_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth'
-
-model = blip_decoder(pretrained=model_url, image_size=384, vit='large')
-model.eval()
-model = model.to(device)
-
-
-from models.blip_vqa import blip_vqa
-
-image_size_vq = 480
-transform_vq = transforms.Compose([
- transforms.Resize((image_size_vq,image_size_vq),interpolation=InterpolationMode.BICUBIC),
- transforms.ToTensor(),
- transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
- ])
-
-model_url_vq = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model*_vqa.pth'
-
-model_vq = blip_vqa(pretrained=model_url_vq, image_size=480, vit='base')
-model_vq.eval()
-model_vq = model_vq.to(device)
-
-
-
-def inference(raw_image, model_n, question="", strategy=""):
- if model_n == 'Image Captioning':
- image = transform(raw_image).unsqueeze(0).to(device)
- with torch.no_grad():
- if strategy == "Beam search":
- caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
- else:
- caption = model.generate(image, sample=True, top_p=0.9, max_length=20, min_length=5)
- return 'caption: '+caption[0]
-
- else:
- image_vq = transform_vq(raw_image).unsqueeze(0).to(device)
- with torch.no_grad():
- answer = model_vq(image_vq, question, train=False, inference='generate')
- return 'answer: '+answer[0]
-
-#get caption for a single image
-def get_caption(image_path):
- img = Image.open(image_path)
- return inference(img, "Image Captioning")[9:]
-
-def display(image_path):
- img = mpimg.imread(image_path)
- img = Image.open(image_path)
- plt.imshow(img)
- print("Caption: " + get_caption(image_path))
-
-#returns a dictionary with key -> img_path and value -> caption
-def get_captions(img_directory, print_status=True):
- #key is img path, value is the caption
- captions = {}
- length = 0
- for file in os.listdir(img_directory):
- length+=1
- count = 0
- for file in os.listdir(img_directory):
- f = os.path.join(img_directory, file)
- captions[f] = inference(Image.open(f), "Image Captioning")
- if print_status:
- print("Images complete:", str(count) + "/" + str(length))
- print("Caption:", captions[f])
- return captions
-#writes dictionary to file, key and value separated by ':'
-def write_to_file(filename, caption_dict):
- with open(filename, "w") as file:
- for i in caption_dict:
- file.write(i + ":" + caption_dict[i])
- file.close()
-
- # Text to Image API
-
-import requests
-import base64
-
-def get_image(prompt="Random monster"):
- openai.api_key = os.getenv("OPENAI_KEY")
- response = openai.Image.create(
- prompt = prompt + ", realistic fantasy style",
- n=1,
- size="256x256"
- )
- image_url = response['data'][0]['url']
-
- im = Image.open(requests.get(image_url, stream=True).raw)
- im.save("sample.png", "PNG")
-
- return im
-
-
-#add max tokens a slider
-
-def make_image_and_story(prompt):
- if(prompt is None or prompt == ""):
- img = get_image()
-
- caption = get_caption("sample.png")
-
- co = cohere.Client(os.getenv("COHERE_KEY"))
- response = co.generate(prompt=caption, model ='aeb523c3-a79c-48ba-9274-a12ac07492a2-ft', max_tokens=80)
-
- return Image.open("sample.png"), processing(response.generations[0].text)
- else:
- img = get_image(prompt)
-
- caption = get_caption("sample.png")
- caption += " " + prompt
-
- co = cohere.Client(os.getenv("COHERE_KEY"))
- response = co.generate(prompt=caption, model ='aeb523c3-a79c-48ba-9274-a12ac07492a2-ft', max_tokens=80)
-
- return Image.open("sample.png"), processing(response.generations[0].text)
-
-
-gr.Interface(fn=make_image_and_story, inputs="text", outputs=["image","text"],title='Fantasy Creature Generator').launch();
\ No newline at end of file
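
The `cap`/`processing` helpers above are easiest to understand from a small input/output pair, so here is a rough illustration (assuming both functions are in scope); the story text is invented.

```python
# Rough illustration of the clean-up pipeline: capitalize sentence starts, drop stray
# brackets/punctuation, and truncate everything after the last finished sentence.
story = "the dragon guarded (its) hoard.\nit never slept! the gold stayed hidden, or so"
print(processing(story))
# roughly: "The dragon guarded its hoard.\nIt never slept!"
```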
diff --git a/spaces/artificialguybr/video-dubbing/Wav2Lip/face_detection/README.md b/spaces/artificialguybr/video-dubbing/Wav2Lip/face_detection/README.md
deleted file mode 100644
index c073376e4eeda6d4b29cc31c50cb7e88ab42bb73..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/Wav2Lip/face_detection/README.md
+++ /dev/null
@@ -1 +0,0 @@
-The code for Face Detection in this folder has been taken from the wonderful [face_alignment](https://github.com/1adrianb/face-alignment) repository. This has been modified to take batches of faces at a time.
\ No newline at end of file
diff --git a/spaces/artificialguybr/video-dubbing/whisper/whisper/normalizers/english.py b/spaces/artificialguybr/video-dubbing/whisper/whisper/normalizers/english.py
deleted file mode 100644
index 4932042bc5b7e9c3fed75a03af66948e4225a2b0..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/whisper/whisper/normalizers/english.py
+++ /dev/null
@@ -1,550 +0,0 @@
-import json
-import os
-import re
-from fractions import Fraction
-from typing import Iterator, List, Match, Optional, Union
-
-from more_itertools import windowed
-
-from .basic import remove_symbols_and_diacritics
-
-
-class EnglishNumberNormalizer:
- """
- Convert any spelled-out numbers into arabic numbers, while handling:
-
- - remove any commas
- - keep the suffixes such as: `1960s`, `274th`, `32nd`, etc.
- - spell out currency symbols after the number. e.g. `$20 million` -> `20000000 dollars`
- - spell out `one` and `ones`
- - interpret successive single-digit numbers as nominal: `one oh one` -> `101`
- """
-
- def __init__(self):
- super().__init__()
-
- self.zeros = {"o", "oh", "zero"}
- self.ones = {
- name: i
- for i, name in enumerate(
- [
- "one",
- "two",
- "three",
- "four",
- "five",
- "six",
- "seven",
- "eight",
- "nine",
- "ten",
- "eleven",
- "twelve",
- "thirteen",
- "fourteen",
- "fifteen",
- "sixteen",
- "seventeen",
- "eighteen",
- "nineteen",
- ],
- start=1,
- )
- }
- self.ones_plural = {
- "sixes" if name == "six" else name + "s": (value, "s")
- for name, value in self.ones.items()
- }
- self.ones_ordinal = {
- "zeroth": (0, "th"),
- "first": (1, "st"),
- "second": (2, "nd"),
- "third": (3, "rd"),
- "fifth": (5, "th"),
- "twelfth": (12, "th"),
- **{
- name + ("h" if name.endswith("t") else "th"): (value, "th")
- for name, value in self.ones.items()
- if value > 3 and value != 5 and value != 12
- },
- }
- self.ones_suffixed = {**self.ones_plural, **self.ones_ordinal}
-
- self.tens = {
- "twenty": 20,
- "thirty": 30,
- "forty": 40,
- "fifty": 50,
- "sixty": 60,
- "seventy": 70,
- "eighty": 80,
- "ninety": 90,
- }
- self.tens_plural = {
- name.replace("y", "ies"): (value, "s") for name, value in self.tens.items()
- }
- self.tens_ordinal = {
- name.replace("y", "ieth"): (value, "th")
- for name, value in self.tens.items()
- }
- self.tens_suffixed = {**self.tens_plural, **self.tens_ordinal}
-
- self.multipliers = {
- "hundred": 100,
- "thousand": 1_000,
- "million": 1_000_000,
- "billion": 1_000_000_000,
- "trillion": 1_000_000_000_000,
- "quadrillion": 1_000_000_000_000_000,
- "quintillion": 1_000_000_000_000_000_000,
- "sextillion": 1_000_000_000_000_000_000_000,
- "septillion": 1_000_000_000_000_000_000_000_000,
- "octillion": 1_000_000_000_000_000_000_000_000_000,
- "nonillion": 1_000_000_000_000_000_000_000_000_000_000,
- "decillion": 1_000_000_000_000_000_000_000_000_000_000_000,
- }
- self.multipliers_plural = {
- name + "s": (value, "s") for name, value in self.multipliers.items()
- }
- self.multipliers_ordinal = {
- name + "th": (value, "th") for name, value in self.multipliers.items()
- }
- self.multipliers_suffixed = {
- **self.multipliers_plural,
- **self.multipliers_ordinal,
- }
- self.decimals = {*self.ones, *self.tens, *self.zeros}
-
- self.preceding_prefixers = {
- "minus": "-",
- "negative": "-",
- "plus": "+",
- "positive": "+",
- }
- self.following_prefixers = {
- "pound": "£",
- "pounds": "£",
- "euro": "€",
- "euros": "€",
- "dollar": "$",
- "dollars": "$",
- "cent": "¢",
- "cents": "¢",
- }
- self.prefixes = set(
- list(self.preceding_prefixers.values())
- + list(self.following_prefixers.values())
- )
- self.suffixers = {
- "per": {"cent": "%"},
- "percent": "%",
- }
- self.specials = {"and", "double", "triple", "point"}
-
- self.words = set(
- [
- key
- for mapping in [
- self.zeros,
- self.ones,
- self.ones_suffixed,
- self.tens,
- self.tens_suffixed,
- self.multipliers,
- self.multipliers_suffixed,
- self.preceding_prefixers,
- self.following_prefixers,
- self.suffixers,
- self.specials,
- ]
- for key in mapping
- ]
- )
- self.literal_words = {"one", "ones"}
-
- def process_words(self, words: List[str]) -> Iterator[str]:
- prefix: Optional[str] = None
- value: Optional[Union[str, int]] = None
- skip = False
-
- def to_fraction(s: str):
- try:
- return Fraction(s)
- except ValueError:
- return None
-
- def output(result: Union[str, int]):
- nonlocal prefix, value
- result = str(result)
- if prefix is not None:
- result = prefix + result
- value = None
- prefix = None
- return result
-
- if len(words) == 0:
- return
-
- for prev, current, next in windowed([None] + words + [None], 3):
- if skip:
- skip = False
- continue
-
- next_is_numeric = next is not None and re.match(r"^\d+(\.\d+)?$", next)
- has_prefix = current[0] in self.prefixes
- current_without_prefix = current[1:] if has_prefix else current
- if re.match(r"^\d+(\.\d+)?$", current_without_prefix):
- # arabic numbers (potentially with signs and fractions)
- f = to_fraction(current_without_prefix)
- assert f is not None
- if value is not None:
- if isinstance(value, str) and value.endswith("."):
- # concatenate decimals / ip address components
- value = str(value) + str(current)
- continue
- else:
- yield output(value)
-
- prefix = current[0] if has_prefix else prefix
- if f.denominator == 1:
- value = f.numerator # store integers as int
- else:
- value = current_without_prefix
- elif current not in self.words:
- # non-numeric words
- if value is not None:
- yield output(value)
- yield output(current)
- elif current in self.zeros:
- value = str(value or "") + "0"
- elif current in self.ones:
- ones = self.ones[current]
-
- if value is None:
- value = ones
- elif isinstance(value, str) or prev in self.ones:
- if (
- prev in self.tens and ones < 10
- ): # replace the last zero with the digit
- assert value[-1] == "0"
- value = value[:-1] + str(ones)
- else:
- value = str(value) + str(ones)
- elif ones < 10:
- if value % 10 == 0:
- value += ones
- else:
- value = str(value) + str(ones)
- else: # eleven to nineteen
- if value % 100 == 0:
- value += ones
- else:
- value = str(value) + str(ones)
- elif current in self.ones_suffixed:
- # ordinal or cardinal; yield the number right away
- ones, suffix = self.ones_suffixed[current]
- if value is None:
- yield output(str(ones) + suffix)
- elif isinstance(value, str) or prev in self.ones:
- if prev in self.tens and ones < 10:
- assert value[-1] == "0"
- yield output(value[:-1] + str(ones) + suffix)
- else:
- yield output(str(value) + str(ones) + suffix)
- elif ones < 10:
- if value % 10 == 0:
- yield output(str(value + ones) + suffix)
- else:
- yield output(str(value) + str(ones) + suffix)
- else: # eleven to nineteen
- if value % 100 == 0:
- yield output(str(value + ones) + suffix)
- else:
- yield output(str(value) + str(ones) + suffix)
- value = None
- elif current in self.tens:
- tens = self.tens[current]
- if value is None:
- value = tens
- elif isinstance(value, str):
- value = str(value) + str(tens)
- else:
- if value % 100 == 0:
- value += tens
- else:
- value = str(value) + str(tens)
- elif current in self.tens_suffixed:
- # ordinal or cardinal; yield the number right away
- tens, suffix = self.tens_suffixed[current]
- if value is None:
- yield output(str(tens) + suffix)
- elif isinstance(value, str):
- yield output(str(value) + str(tens) + suffix)
- else:
- if value % 100 == 0:
- yield output(str(value + tens) + suffix)
- else:
- yield output(str(value) + str(tens) + suffix)
- elif current in self.multipliers:
- multiplier = self.multipliers[current]
- if value is None:
- value = multiplier
- elif isinstance(value, str) or value == 0:
- f = to_fraction(value)
- p = f * multiplier if f is not None else None
- if f is not None and p.denominator == 1:
- value = p.numerator
- else:
- yield output(value)
- value = multiplier
- else:
- before = value // 1000 * 1000
- residual = value % 1000
- value = before + residual * multiplier
- elif current in self.multipliers_suffixed:
- multiplier, suffix = self.multipliers_suffixed[current]
- if value is None:
- yield output(str(multiplier) + suffix)
- elif isinstance(value, str):
- f = to_fraction(value)
- p = f * multiplier if f is not None else None
- if f is not None and p.denominator == 1:
- yield output(str(p.numerator) + suffix)
- else:
- yield output(value)
- yield output(str(multiplier) + suffix)
- else: # int
- before = value // 1000 * 1000
- residual = value % 1000
- value = before + residual * multiplier
- yield output(str(value) + suffix)
- value = None
- elif current in self.preceding_prefixers:
- # apply prefix (positive, minus, etc.) if it precedes a number
- if value is not None:
- yield output(value)
-
- if next in self.words or next_is_numeric:
- prefix = self.preceding_prefixers[current]
- else:
- yield output(current)
- elif current in self.following_prefixers:
- # apply prefix (dollars, cents, etc.) only after a number
- if value is not None:
- prefix = self.following_prefixers[current]
- yield output(value)
- else:
- yield output(current)
- elif current in self.suffixers:
- # apply suffix symbols (percent -> '%')
- if value is not None:
- suffix = self.suffixers[current]
- if isinstance(suffix, dict):
- if next in suffix:
- yield output(str(value) + suffix[next])
- skip = True
- else:
- yield output(value)
- yield output(current)
- else:
- yield output(str(value) + suffix)
- else:
- yield output(current)
- elif current in self.specials:
- if next not in self.words and not next_is_numeric:
- # apply special handling only if the next word can be numeric
- if value is not None:
- yield output(value)
- yield output(current)
- elif current == "and":
- # ignore "and" after hundreds, thousands, etc.
- if prev not in self.multipliers:
- if value is not None:
- yield output(value)
- yield output(current)
- elif current == "double" or current == "triple":
- if next in self.ones or next in self.zeros:
- repeats = 2 if current == "double" else 3
- ones = self.ones.get(next, 0)
- value = str(value or "") + str(ones) * repeats
- skip = True
- else:
- if value is not None:
- yield output(value)
- yield output(current)
- elif current == "point":
- if next in self.decimals or next_is_numeric:
- value = str(value or "") + "."
- else:
- # should all have been covered at this point
- raise ValueError(f"Unexpected token: {current}")
- else:
- # all should have been covered at this point
- raise ValueError(f"Unexpected token: {current}")
-
- if value is not None:
- yield output(value)
-
- def preprocess(self, s: str):
- # replace " and a half" with " point five"
- results = []
-
- segments = re.split(r"\band\s+a\s+half\b", s)
- for i, segment in enumerate(segments):
- if len(segment.strip()) == 0:
- continue
- if i == len(segments) - 1:
- results.append(segment)
- else:
- results.append(segment)
- last_word = segment.rsplit(maxsplit=2)[-1]
- if last_word in self.decimals or last_word in self.multipliers:
- results.append("point five")
- else:
- results.append("and a half")
-
- s = " ".join(results)
-
- # put a space at number/letter boundary
- s = re.sub(r"([a-z])([0-9])", r"\1 \2", s)
- s = re.sub(r"([0-9])([a-z])", r"\1 \2", s)
-
- # but remove spaces which could be a suffix
- s = re.sub(r"([0-9])\s+(st|nd|rd|th|s)\b", r"\1\2", s)
-
- return s
-
- def postprocess(self, s: str):
- def combine_cents(m: Match):
- try:
- currency = m.group(1)
- integer = m.group(2)
- cents = int(m.group(3))
- return f"{currency}{integer}.{cents:02d}"
- except ValueError:
- return m.string
-
- def extract_cents(m: Match):
- try:
- return f"¢{int(m.group(1))}"
- except ValueError:
- return m.string
-
- # apply currency postprocessing; "$2 and ¢7" -> "$2.07"
- s = re.sub(r"([€£$])([0-9]+) (?:and )?¢([0-9]{1,2})\b", combine_cents, s)
- s = re.sub(r"[€£$]0.([0-9]{1,2})\b", extract_cents, s)
-
- # write "one(s)" instead of "1(s)", just for the readability
- s = re.sub(r"\b1(s?)\b", r"one\1", s)
-
- return s
-
- def __call__(self, s: str):
- s = self.preprocess(s)
- s = " ".join(word for word in self.process_words(s.split()) if word is not None)
- s = self.postprocess(s)
-
- return s
-
-
-class EnglishSpellingNormalizer:
- """
- Applies British-American spelling mappings as listed in [1].
-
- [1] https://www.tysto.com/uk-us-spelling-list.html
- """
-
- def __init__(self):
- mapping_path = os.path.join(os.path.dirname(__file__), "english.json")
- self.mapping = json.load(open(mapping_path))
-
- def __call__(self, s: str):
- return " ".join(self.mapping.get(word, word) for word in s.split())
-
-
-class EnglishTextNormalizer:
- def __init__(self):
- self.ignore_patterns = r"\b(hmm|mm|mhm|mmm|uh|um)\b"
- self.replacers = {
- # common contractions
- r"\bwon't\b": "will not",
- r"\bcan't\b": "can not",
- r"\blet's\b": "let us",
- r"\bain't\b": "aint",
- r"\by'all\b": "you all",
- r"\bwanna\b": "want to",
- r"\bgotta\b": "got to",
- r"\bgonna\b": "going to",
- r"\bi'ma\b": "i am going to",
- r"\bimma\b": "i am going to",
- r"\bwoulda\b": "would have",
- r"\bcoulda\b": "could have",
- r"\bshoulda\b": "should have",
- r"\bma'am\b": "madam",
- # contractions in titles/prefixes
- r"\bmr\b": "mister ",
- r"\bmrs\b": "missus ",
- r"\bst\b": "saint ",
- r"\bdr\b": "doctor ",
- r"\bprof\b": "professor ",
- r"\bcapt\b": "captain ",
- r"\bgov\b": "governor ",
- r"\bald\b": "alderman ",
- r"\bgen\b": "general ",
- r"\bsen\b": "senator ",
- r"\brep\b": "representative ",
- r"\bpres\b": "president ",
- r"\brev\b": "reverend ",
- r"\bhon\b": "honorable ",
- r"\basst\b": "assistant ",
- r"\bassoc\b": "associate ",
- r"\blt\b": "lieutenant ",
- r"\bcol\b": "colonel ",
- r"\bjr\b": "junior ",
- r"\bsr\b": "senior ",
- r"\besq\b": "esquire ",
-            # perfect tenses, ideally it should be any past participle, but it's harder..
- r"'d been\b": " had been",
- r"'s been\b": " has been",
- r"'d gone\b": " had gone",
- r"'s gone\b": " has gone",
- r"'d done\b": " had done", # "'s done" is ambiguous
- r"'s got\b": " has got",
- # general contractions
- r"n't\b": " not",
- r"'re\b": " are",
- r"'s\b": " is",
- r"'d\b": " would",
- r"'ll\b": " will",
- r"'t\b": " not",
- r"'ve\b": " have",
- r"'m\b": " am",
- }
- self.standardize_numbers = EnglishNumberNormalizer()
- self.standardize_spellings = EnglishSpellingNormalizer()
-
- def __call__(self, s: str):
- s = s.lower()
-
- s = re.sub(r"[<\[][^>\]]*[>\]]", "", s) # remove words between brackets
- s = re.sub(r"\(([^)]+?)\)", "", s) # remove words between parenthesis
- s = re.sub(self.ignore_patterns, "", s)
- s = re.sub(r"\s+'", "'", s) # when there's a space before an apostrophe
-
- for pattern, replacement in self.replacers.items():
- s = re.sub(pattern, replacement, s)
-
- s = re.sub(r"(\d),(\d)", r"\1\2", s) # remove commas between digits
- s = re.sub(r"\.([^0-9]|$)", r" \1", s) # remove periods not followed by numbers
- s = remove_symbols_and_diacritics(s, keep=".%$¢€£") # keep numeric symbols
-
- s = self.standardize_numbers(s)
- s = self.standardize_spellings(s)
-
- # now remove prefix/suffix symbols that are not preceded/followed by numbers
- s = re.sub(r"[.$¢€£]([^0-9])", r" \1", s)
- s = re.sub(r"([^0-9])%", r"\1 ", s)
-
- s = re.sub(r"\s+", " ", s) # replace any successive whitespaces with a space
-
- return s
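
A short usage sketch makes the combined behaviour of these classes concrete. It assumes the module is importable as in `whisper.normalizers.english`, and the printed result reflects the documented rules rather than a captured run.

```python
# Sketch: end-to-end normalization of a transcript snippet.
normalizer = EnglishTextNormalizer()
print(normalizer("Mr. Smith paid twenty five dollars and seven cents, um, yesterday!"))
# expected along the lines of: "mister smith paid $25.07 yesterday"
```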
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/__init__.py
deleted file mode 100644
index 216c000dc5ffc8e53cc9c596e420c1e67604d1aa..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-__author__ = 'ericvergnaud'
diff --git a/spaces/avans06/whisper-webui-translate/app-shared.py b/spaces/avans06/whisper-webui-translate/app-shared.py
deleted file mode 100644
index 63cac1a8adaf90784c5f5f178f86243ad2149ee4..0000000000000000000000000000000000000000
--- a/spaces/avans06/whisper-webui-translate/app-shared.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Run the app with no audio file restrictions
-from app import create_ui
-from src.config import ApplicationConfig
-
-create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1, share=True))
\ No newline at end of file
diff --git a/spaces/awacke1/DnD-Character-Sheet2/README.md b/spaces/awacke1/DnD-Character-Sheet2/README.md
deleted file mode 100644
index 9472ab2733767dc245932138ab15bd525d3cb9d1..0000000000000000000000000000000000000000
--- a/spaces/awacke1/DnD-Character-Sheet2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DnD Character Sheet2
-emoji: 📚
-colorFrom: gray
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/StreamlitCookies/README.md b/spaces/awacke1/StreamlitCookies/README.md
deleted file mode 100644
index 4132e2d34af9daadfa2078d055e9118a42edbde0..0000000000000000000000000000000000000000
--- a/spaces/awacke1/StreamlitCookies/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: StreamlitCookies
-emoji: 😻
-colorFrom: blue
-colorTo: red
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/awacke1/VideoSummaryYoutube3/summarize.py b/spaces/awacke1/VideoSummaryYoutube3/summarize.py
deleted file mode 100644
index 0053dde4348f24cc152a60c4d20f201e3b1f5482..0000000000000000000000000000000000000000
--- a/spaces/awacke1/VideoSummaryYoutube3/summarize.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import traceback
-import sys
-
-from youtube_transcript_api import YouTubeTranscriptApi
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
-def Summarizer(link, model):
-
- video_id = link.split("=")[1]
-
- try:
- transcript = YouTubeTranscriptApi.get_transcript(video_id)
- FinalTranscript = ' '.join([i['text'] for i in transcript])
-
- if model == "Pegasus":
- checkpoint = "google/pegasus-large"
- elif model == "mT5":
- checkpoint = "csebuetnlp/mT5_multilingual_XLSum"
- elif model == "BART":
- checkpoint = "sshleifer/distilbart-cnn-12-6"
-
- tokenizer = AutoTokenizer.from_pretrained(checkpoint)
- model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
-
-
- inputs = tokenizer(FinalTranscript,
- max_length=1024,
- truncation=True,
- return_tensors="pt")
-
- summary_ids = model.generate(inputs["input_ids"])
- summary = tokenizer.batch_decode(summary_ids,
- skip_special_tokens=True,
- clean_up_tokenization_spaces=False)
-
-
- return summary[0]
-
-
- except Exception:
-        # log the full traceback; sys.exc_info() carries the same information
-        print(traceback.format_exc())
\ No newline at end of file
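
For context, a typical call looks like the sketch below; the URL is a placeholder, the first run downloads the chosen checkpoint, and note that `link.split("=")[1]` assumes a standard `watch?v=` YouTube URL.

```python
# Sketch only: summarize an English-language video with the distilBART checkpoint.
summary = Summarizer("https://www.youtube.com/watch?v=VIDEO_ID", "BART")
print(summary)
```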
diff --git a/spaces/awsaf49/gcvit-tf/gcvit/layers/window.py b/spaces/awsaf49/gcvit-tf/gcvit/layers/window.py
deleted file mode 100644
index 596b0083db86b40f3b03d8bb22d78164096d471f..0000000000000000000000000000000000000000
--- a/spaces/awsaf49/gcvit-tf/gcvit/layers/window.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import tensorflow as tf
-
-def window_partition(x, window_size):
- B, H, W, C = tf.unstack(tf.shape(x), num=4)
- x = tf.reshape(x, shape=[-1, H // window_size, window_size, W // window_size, window_size, C])
- x = tf.transpose(x, perm=[0, 1, 3, 2, 4, 5])
- windows = tf.reshape(x, shape=[-1, window_size, window_size, C])
- return windows
-
-
-def window_reverse(windows, window_size, H, W, C):
- x = tf.reshape(windows, shape=[-1, H // window_size, W // window_size, window_size, window_size, C])
- x = tf.transpose(x, perm=[0, 1, 3, 2, 4, 5])
- x = tf.reshape(x, shape=[-1, H, W, C])
- return x
\ No newline at end of file
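
A quick round-trip check illustrates the intended shapes for these two helpers; the sizes below are arbitrary.

```python
# Illustrative round trip: partition an 8x8 feature map into 4x4 windows and restore it.
import tensorflow as tf

x = tf.random.normal([2, 8, 8, 32])                 # (B, H, W, C)
windows = window_partition(x, window_size=4)        # (B * num_windows, 4, 4, 32)
print(windows.shape)                                # (8, 4, 4, 32)

restored = window_reverse(windows, 4, H=8, W=8, C=32)
print(bool(tf.reduce_all(tf.equal(x, restored))))   # True: pure reshape/transpose
```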
diff --git a/spaces/aziz7751/lan2lan/README.md b/spaces/aziz7751/lan2lan/README.md
deleted file mode 100644
index f150e177f1b57cf08e7ca4dabc4824d18704e8e6..0000000000000000000000000000000000000000
--- a/spaces/aziz7751/lan2lan/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Lan2lan
-emoji: 💻
-colorFrom: green
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/draco/README.md b/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/draco/README.md
deleted file mode 100644
index 830a7f251ce8fc3deb120364e30510c78fa80046..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/draco/README.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# Draco 3D Data Compression
-
-Draco is an open-source library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.
-
-[Website](https://google.github.io/draco/) | [GitHub](https://github.com/google/draco)
-
-## Contents
-
-This folder contains three utilities:
-
-* `draco_decoder.js` — Emscripten-compiled decoder, compatible with any modern browser.
-* `draco_decoder.wasm` — WebAssembly decoder, compatible with newer browsers and devices.
-* `draco_wasm_wrapper.js` — JavaScript wrapper for the WASM decoder.
-
-Each file is provided in two variations:
-
-* **Default:** Latest stable builds, tracking the project's [master branch](https://github.com/google/draco).
-* **glTF:** Builds targeted by the [glTF mesh compression extension](https://github.com/KhronosGroup/glTF/tree/master/extensions/2.0/Khronos/KHR_draco_mesh_compression), tracking the [corresponding Draco branch](https://github.com/google/draco/tree/gltf_2.0_draco_extension).
-
-Either variation may be used with `THREE.DRACOLoader`:
-
-```js
-THREE.DRACOLoader.setDecoderPath('path/to/decoders/');
-THREE.DRACOLoader.setDecoderConfig({type: 'js'}); // (Optional) Override detection of WASM support.
-var dracoLoader = new THREE.DRACOLoader();
-```
-
-Further [documentation on GitHub](https://github.com/google/draco/tree/master/javascript/example#static-loading-javascript-decoder).
-
-## License
-
-[Apache License 2.0](https://github.com/google/draco/blob/master/LICENSE)
diff --git a/spaces/bigslime/stablediffusion-infinity/convert_checkpoint.py b/spaces/bigslime/stablediffusion-infinity/convert_checkpoint.py
deleted file mode 100644
index 34efcf1ab17190b8b140f02e9ff3451daf2c6f9e..0000000000000000000000000000000000000000
--- a/spaces/bigslime/stablediffusion-infinity/convert_checkpoint.py
+++ /dev/null
@@ -1,706 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py
-""" Conversion script for the LDM checkpoints. """
-
-import argparse
-import os
-
-import torch
-
-
-try:
- from omegaconf import OmegaConf
-except ImportError:
- raise ImportError(
- "OmegaConf is required to convert the LDM checkpoints. Please install it with `pip install OmegaConf`."
- )
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- LDMTextToImagePipeline,
- LMSDiscreteScheduler,
- PNDMScheduler,
- StableDiffusionPipeline,
- UNet2DConditionModel,
-)
-from diffusers.pipelines.latent_diffusion.pipeline_latent_diffusion import LDMBertConfig, LDMBertModel
-from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
-from transformers import AutoFeatureExtractor, BertTokenizerFast, CLIPTextModel, CLIPTokenizer
-
-
-def shave_segments(path, n_shave_prefix_segments=1):
- """
- Removes segments. Positive values shave the first segments, negative shave the last segments.
- """
- if n_shave_prefix_segments >= 0:
- return ".".join(path.split(".")[n_shave_prefix_segments:])
- else:
- return ".".join(path.split(".")[:n_shave_prefix_segments])
-
-
-def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
- """
- Updates paths inside resnets to the new naming scheme (local renaming)
- """
- mapping = []
- for old_item in old_list:
- new_item = old_item.replace("in_layers.0", "norm1")
- new_item = new_item.replace("in_layers.2", "conv1")
-
- new_item = new_item.replace("out_layers.0", "norm2")
- new_item = new_item.replace("out_layers.3", "conv2")
-
- new_item = new_item.replace("emb_layers.1", "time_emb_proj")
- new_item = new_item.replace("skip_connection", "conv_shortcut")
-
- new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments)
-
- mapping.append({"old": old_item, "new": new_item})
-
- return mapping
-
-
-def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0):
- """
- Updates paths inside resnets to the new naming scheme (local renaming)
- """
- mapping = []
- for old_item in old_list:
- new_item = old_item
-
- new_item = new_item.replace("nin_shortcut", "conv_shortcut")
- new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments)
-
- mapping.append({"old": old_item, "new": new_item})
-
- return mapping
-
-
-def renew_attention_paths(old_list, n_shave_prefix_segments=0):
- """
- Updates paths inside attentions to the new naming scheme (local renaming)
- """
- mapping = []
- for old_item in old_list:
- new_item = old_item
-
- # new_item = new_item.replace('norm.weight', 'group_norm.weight')
- # new_item = new_item.replace('norm.bias', 'group_norm.bias')
-
- # new_item = new_item.replace('proj_out.weight', 'proj_attn.weight')
- # new_item = new_item.replace('proj_out.bias', 'proj_attn.bias')
-
- # new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments)
-
- mapping.append({"old": old_item, "new": new_item})
-
- return mapping
-
-
-def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0):
- """
- Updates paths inside attentions to the new naming scheme (local renaming)
- """
- mapping = []
- for old_item in old_list:
- new_item = old_item
-
- new_item = new_item.replace("norm.weight", "group_norm.weight")
- new_item = new_item.replace("norm.bias", "group_norm.bias")
-
- new_item = new_item.replace("q.weight", "query.weight")
- new_item = new_item.replace("q.bias", "query.bias")
-
- new_item = new_item.replace("k.weight", "key.weight")
- new_item = new_item.replace("k.bias", "key.bias")
-
- new_item = new_item.replace("v.weight", "value.weight")
- new_item = new_item.replace("v.bias", "value.bias")
-
- new_item = new_item.replace("proj_out.weight", "proj_attn.weight")
- new_item = new_item.replace("proj_out.bias", "proj_attn.bias")
-
- new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments)
-
- mapping.append({"old": old_item, "new": new_item})
-
- return mapping
-
-
-def assign_to_checkpoint(
- paths, checkpoint, old_checkpoint, attention_paths_to_split=None, additional_replacements=None, config=None
-):
- """
- This does the final conversion step: take locally converted weights and apply a global renaming
- to them. It splits attention layers, and takes into account additional replacements
- that may arise.
-
- Assigns the weights to the new checkpoint.
- """
- assert isinstance(paths, list), "Paths should be a list of dicts containing 'old' and 'new' keys."
-
- # Splits the attention layers into three variables.
- if attention_paths_to_split is not None:
- for path, path_map in attention_paths_to_split.items():
- old_tensor = old_checkpoint[path]
- channels = old_tensor.shape[0] // 3
-
- target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1)
-
- num_heads = old_tensor.shape[0] // config["num_head_channels"] // 3
-
- old_tensor = old_tensor.reshape((num_heads, 3 * channels // num_heads) + old_tensor.shape[1:])
- query, key, value = old_tensor.split(channels // num_heads, dim=1)
-
- checkpoint[path_map["query"]] = query.reshape(target_shape)
- checkpoint[path_map["key"]] = key.reshape(target_shape)
- checkpoint[path_map["value"]] = value.reshape(target_shape)
-
- for path in paths:
- new_path = path["new"]
-
- # These have already been assigned
- if attention_paths_to_split is not None and new_path in attention_paths_to_split:
- continue
-
- # Global renaming happens here
- new_path = new_path.replace("middle_block.0", "mid_block.resnets.0")
- new_path = new_path.replace("middle_block.1", "mid_block.attentions.0")
- new_path = new_path.replace("middle_block.2", "mid_block.resnets.1")
-
- if additional_replacements is not None:
- for replacement in additional_replacements:
- new_path = new_path.replace(replacement["old"], replacement["new"])
-
- # proj_attn.weight has to be converted from conv 1D to linear
- if "proj_attn.weight" in new_path:
- checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0]
- else:
- checkpoint[new_path] = old_checkpoint[path["old"]]
-
-
-def conv_attn_to_linear(checkpoint):
- keys = list(checkpoint.keys())
- attn_keys = ["query.weight", "key.weight", "value.weight"]
- for key in keys:
- if ".".join(key.split(".")[-2:]) in attn_keys:
- if checkpoint[key].ndim > 2:
- checkpoint[key] = checkpoint[key][:, :, 0, 0]
- elif "proj_attn.weight" in key:
- if checkpoint[key].ndim > 2:
- checkpoint[key] = checkpoint[key][:, :, 0]
-
-
-def create_unet_diffusers_config(original_config):
- """
- Creates a config for the diffusers based on the config of the LDM model.
- """
- unet_params = original_config.model.params.unet_config.params
-
- block_out_channels = [unet_params.model_channels * mult for mult in unet_params.channel_mult]
-
- down_block_types = []
- resolution = 1
- for i in range(len(block_out_channels)):
- block_type = "CrossAttnDownBlock2D" if resolution in unet_params.attention_resolutions else "DownBlock2D"
- down_block_types.append(block_type)
- if i != len(block_out_channels) - 1:
- resolution *= 2
-
- up_block_types = []
- for i in range(len(block_out_channels)):
- block_type = "CrossAttnUpBlock2D" if resolution in unet_params.attention_resolutions else "UpBlock2D"
- up_block_types.append(block_type)
- resolution //= 2
-
- config = dict(
- sample_size=unet_params.image_size,
- in_channels=unet_params.in_channels,
- out_channels=unet_params.out_channels,
- down_block_types=tuple(down_block_types),
- up_block_types=tuple(up_block_types),
- block_out_channels=tuple(block_out_channels),
- layers_per_block=unet_params.num_res_blocks,
- cross_attention_dim=unet_params.context_dim,
- attention_head_dim=unet_params.num_heads,
- )
-
- return config
-
-
-def create_vae_diffusers_config(original_config):
- """
- Creates a config for the diffusers based on the config of the LDM model.
- """
- vae_params = original_config.model.params.first_stage_config.params.ddconfig
- _ = original_config.model.params.first_stage_config.params.embed_dim
-
- block_out_channels = [vae_params.ch * mult for mult in vae_params.ch_mult]
- down_block_types = ["DownEncoderBlock2D"] * len(block_out_channels)
- up_block_types = ["UpDecoderBlock2D"] * len(block_out_channels)
-
- config = dict(
- sample_size=vae_params.resolution,
- in_channels=vae_params.in_channels,
- out_channels=vae_params.out_ch,
- down_block_types=tuple(down_block_types),
- up_block_types=tuple(up_block_types),
- block_out_channels=tuple(block_out_channels),
- latent_channels=vae_params.z_channels,
- layers_per_block=vae_params.num_res_blocks,
- )
- return config
-
-
-def create_diffusers_schedular(original_config):
- schedular = DDIMScheduler(
- num_train_timesteps=original_config.model.params.timesteps,
- beta_start=original_config.model.params.linear_start,
- beta_end=original_config.model.params.linear_end,
- beta_schedule="scaled_linear",
- )
- return schedular
-
-
-def create_ldm_bert_config(original_config):
-    bert_params = original_config.model.params.cond_stage_config.params
- config = LDMBertConfig(
- d_model=bert_params.n_embed,
- encoder_layers=bert_params.n_layer,
- encoder_ffn_dim=bert_params.n_embed * 4,
- )
- return config
-
-
-def convert_ldm_unet_checkpoint(checkpoint, config):
- """
- Takes a state dict and a config, and returns a converted checkpoint.
- """
-
- # extract state_dict for UNet
- unet_state_dict = {}
- unet_key = "model.diffusion_model."
- keys = list(checkpoint.keys())
- for key in keys:
- if key.startswith(unet_key):
- unet_state_dict[key.replace(unet_key, "")] = checkpoint.pop(key)
-
- new_checkpoint = {}
-
- new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
- new_checkpoint["time_embedding.linear_1.bias"] = unet_state_dict["time_embed.0.bias"]
- new_checkpoint["time_embedding.linear_2.weight"] = unet_state_dict["time_embed.2.weight"]
- new_checkpoint["time_embedding.linear_2.bias"] = unet_state_dict["time_embed.2.bias"]
-
- new_checkpoint["conv_in.weight"] = unet_state_dict["input_blocks.0.0.weight"]
- new_checkpoint["conv_in.bias"] = unet_state_dict["input_blocks.0.0.bias"]
-
- new_checkpoint["conv_norm_out.weight"] = unet_state_dict["out.0.weight"]
- new_checkpoint["conv_norm_out.bias"] = unet_state_dict["out.0.bias"]
- new_checkpoint["conv_out.weight"] = unet_state_dict["out.2.weight"]
- new_checkpoint["conv_out.bias"] = unet_state_dict["out.2.bias"]
-
- # Retrieves the keys for the input blocks only
- num_input_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "input_blocks" in layer})
- input_blocks = {
- layer_id: [key for key in unet_state_dict if f"input_blocks.{layer_id}" in key]
- for layer_id in range(num_input_blocks)
- }
-
- # Retrieves the keys for the middle blocks only
- num_middle_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "middle_block" in layer})
- middle_blocks = {
- layer_id: [key for key in unet_state_dict if f"middle_block.{layer_id}" in key]
- for layer_id in range(num_middle_blocks)
- }
-
- # Retrieves the keys for the output blocks only
- num_output_blocks = len({".".join(layer.split(".")[:2]) for layer in unet_state_dict if "output_blocks" in layer})
- output_blocks = {
- layer_id: [key for key in unet_state_dict if f"output_blocks.{layer_id}" in key]
- for layer_id in range(num_output_blocks)
- }
-
- for i in range(1, num_input_blocks):
- block_id = (i - 1) // (config["layers_per_block"] + 1)
- layer_in_block_id = (i - 1) % (config["layers_per_block"] + 1)
-
- resnets = [
- key for key in input_blocks[i] if f"input_blocks.{i}.0" in key and f"input_blocks.{i}.0.op" not in key
- ]
- attentions = [key for key in input_blocks[i] if f"input_blocks.{i}.1" in key]
-
- if f"input_blocks.{i}.0.op.weight" in unet_state_dict:
- new_checkpoint[f"down_blocks.{block_id}.downsamplers.0.conv.weight"] = unet_state_dict.pop(
- f"input_blocks.{i}.0.op.weight"
- )
- new_checkpoint[f"down_blocks.{block_id}.downsamplers.0.conv.bias"] = unet_state_dict.pop(
- f"input_blocks.{i}.0.op.bias"
- )
-
- paths = renew_resnet_paths(resnets)
- meta_path = {"old": f"input_blocks.{i}.0", "new": f"down_blocks.{block_id}.resnets.{layer_in_block_id}"}
- assign_to_checkpoint(
- paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config
- )
-
- if len(attentions):
- paths = renew_attention_paths(attentions)
- meta_path = {"old": f"input_blocks.{i}.1", "new": f"down_blocks.{block_id}.attentions.{layer_in_block_id}"}
- assign_to_checkpoint(
- paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config
- )
-
- resnet_0 = middle_blocks[0]
- attentions = middle_blocks[1]
- resnet_1 = middle_blocks[2]
-
- resnet_0_paths = renew_resnet_paths(resnet_0)
- assign_to_checkpoint(resnet_0_paths, new_checkpoint, unet_state_dict, config=config)
-
- resnet_1_paths = renew_resnet_paths(resnet_1)
- assign_to_checkpoint(resnet_1_paths, new_checkpoint, unet_state_dict, config=config)
-
- attentions_paths = renew_attention_paths(attentions)
- meta_path = {"old": "middle_block.1", "new": "mid_block.attentions.0"}
- assign_to_checkpoint(
- attentions_paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config
- )
-
- for i in range(num_output_blocks):
- block_id = i // (config["layers_per_block"] + 1)
- layer_in_block_id = i % (config["layers_per_block"] + 1)
- output_block_layers = [shave_segments(name, 2) for name in output_blocks[i]]
- output_block_list = {}
-
- for layer in output_block_layers:
- layer_id, layer_name = layer.split(".")[0], shave_segments(layer, 1)
- if layer_id in output_block_list:
- output_block_list[layer_id].append(layer_name)
- else:
- output_block_list[layer_id] = [layer_name]
-
- if len(output_block_list) > 1:
- resnets = [key for key in output_blocks[i] if f"output_blocks.{i}.0" in key]
- attentions = [key for key in output_blocks[i] if f"output_blocks.{i}.1" in key]
-
- resnet_0_paths = renew_resnet_paths(resnets)
- paths = renew_resnet_paths(resnets)
-
- meta_path = {"old": f"output_blocks.{i}.0", "new": f"up_blocks.{block_id}.resnets.{layer_in_block_id}"}
- assign_to_checkpoint(
- paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config
- )
-
- if ["conv.weight", "conv.bias"] in output_block_list.values():
- index = list(output_block_list.values()).index(["conv.weight", "conv.bias"])
- new_checkpoint[f"up_blocks.{block_id}.upsamplers.0.conv.weight"] = unet_state_dict[
- f"output_blocks.{i}.{index}.conv.weight"
- ]
- new_checkpoint[f"up_blocks.{block_id}.upsamplers.0.conv.bias"] = unet_state_dict[
- f"output_blocks.{i}.{index}.conv.bias"
- ]
-
- # Clear attentions as they have been attributed above.
- if len(attentions) == 2:
- attentions = []
-
- if len(attentions):
- paths = renew_attention_paths(attentions)
- meta_path = {
- "old": f"output_blocks.{i}.1",
- "new": f"up_blocks.{block_id}.attentions.{layer_in_block_id}",
- }
- assign_to_checkpoint(
- paths, new_checkpoint, unet_state_dict, additional_replacements=[meta_path], config=config
- )
- else:
- resnet_0_paths = renew_resnet_paths(output_block_layers, n_shave_prefix_segments=1)
- for path in resnet_0_paths:
- old_path = ".".join(["output_blocks", str(i), path["old"]])
- new_path = ".".join(["up_blocks", str(block_id), "resnets", str(layer_in_block_id), path["new"]])
-
- new_checkpoint[new_path] = unet_state_dict[old_path]
-
- return new_checkpoint
-
-
-def convert_ldm_vae_checkpoint(checkpoint, config):
- # extract state dict for VAE
- vae_state_dict = {}
- vae_key = "first_stage_model."
- keys = list(checkpoint.keys())
- for key in keys:
- if key.startswith(vae_key):
- vae_state_dict[key.replace(vae_key, "")] = checkpoint.get(key)
-
- new_checkpoint = {}
-
- new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"]
- new_checkpoint["encoder.conv_in.bias"] = vae_state_dict["encoder.conv_in.bias"]
- new_checkpoint["encoder.conv_out.weight"] = vae_state_dict["encoder.conv_out.weight"]
- new_checkpoint["encoder.conv_out.bias"] = vae_state_dict["encoder.conv_out.bias"]
- new_checkpoint["encoder.conv_norm_out.weight"] = vae_state_dict["encoder.norm_out.weight"]
- new_checkpoint["encoder.conv_norm_out.bias"] = vae_state_dict["encoder.norm_out.bias"]
-
- new_checkpoint["decoder.conv_in.weight"] = vae_state_dict["decoder.conv_in.weight"]
- new_checkpoint["decoder.conv_in.bias"] = vae_state_dict["decoder.conv_in.bias"]
- new_checkpoint["decoder.conv_out.weight"] = vae_state_dict["decoder.conv_out.weight"]
- new_checkpoint["decoder.conv_out.bias"] = vae_state_dict["decoder.conv_out.bias"]
- new_checkpoint["decoder.conv_norm_out.weight"] = vae_state_dict["decoder.norm_out.weight"]
- new_checkpoint["decoder.conv_norm_out.bias"] = vae_state_dict["decoder.norm_out.bias"]
-
- new_checkpoint["quant_conv.weight"] = vae_state_dict["quant_conv.weight"]
- new_checkpoint["quant_conv.bias"] = vae_state_dict["quant_conv.bias"]
- new_checkpoint["post_quant_conv.weight"] = vae_state_dict["post_quant_conv.weight"]
- new_checkpoint["post_quant_conv.bias"] = vae_state_dict["post_quant_conv.bias"]
-
- # Retrieves the keys for the encoder down blocks only
- num_down_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "encoder.down" in layer})
- down_blocks = {
- layer_id: [key for key in vae_state_dict if f"down.{layer_id}" in key] for layer_id in range(num_down_blocks)
- }
-
- # Retrieves the keys for the decoder up blocks only
- num_up_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "decoder.up" in layer})
- up_blocks = {
- layer_id: [key for key in vae_state_dict if f"up.{layer_id}" in key] for layer_id in range(num_up_blocks)
- }
-
- for i in range(num_down_blocks):
- resnets = [key for key in down_blocks[i] if f"down.{i}" in key and f"down.{i}.downsample" not in key]
-
- if f"encoder.down.{i}.downsample.conv.weight" in vae_state_dict:
- new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.weight"] = vae_state_dict.pop(
- f"encoder.down.{i}.downsample.conv.weight"
- )
- new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.bias"] = vae_state_dict.pop(
- f"encoder.down.{i}.downsample.conv.bias"
- )
-
- paths = renew_vae_resnet_paths(resnets)
- meta_path = {"old": f"down.{i}.block", "new": f"down_blocks.{i}.resnets"}
- assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config)
-
- mid_resnets = [key for key in vae_state_dict if "encoder.mid.block" in key]
- num_mid_res_blocks = 2
- for i in range(1, num_mid_res_blocks + 1):
- resnets = [key for key in mid_resnets if f"encoder.mid.block_{i}" in key]
-
- paths = renew_vae_resnet_paths(resnets)
- meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"}
- assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config)
-
- mid_attentions = [key for key in vae_state_dict if "encoder.mid.attn" in key]
- paths = renew_vae_attention_paths(mid_attentions)
- meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"}
- assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config)
- conv_attn_to_linear(new_checkpoint)
-
- for i in range(num_up_blocks):
- block_id = num_up_blocks - 1 - i
- resnets = [
- key for key in up_blocks[block_id] if f"up.{block_id}" in key and f"up.{block_id}.upsample" not in key
- ]
-
- if f"decoder.up.{block_id}.upsample.conv.weight" in vae_state_dict:
- new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.weight"] = vae_state_dict[
- f"decoder.up.{block_id}.upsample.conv.weight"
- ]
- new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.bias"] = vae_state_dict[
- f"decoder.up.{block_id}.upsample.conv.bias"
- ]
-
- paths = renew_vae_resnet_paths(resnets)
- meta_path = {"old": f"up.{block_id}.block", "new": f"up_blocks.{i}.resnets"}
- assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config)
-
- mid_resnets = [key for key in vae_state_dict if "decoder.mid.block" in key]
- num_mid_res_blocks = 2
- for i in range(1, num_mid_res_blocks + 1):
- resnets = [key for key in mid_resnets if f"decoder.mid.block_{i}" in key]
-
- paths = renew_vae_resnet_paths(resnets)
- meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"}
- assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config)
-
- mid_attentions = [key for key in vae_state_dict if "decoder.mid.attn" in key]
- paths = renew_vae_attention_paths(mid_attentions)
- meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"}
- assign_to_checkpoint(paths, new_checkpoint, vae_state_dict, additional_replacements=[meta_path], config=config)
- conv_attn_to_linear(new_checkpoint)
- return new_checkpoint
-
-
-def convert_ldm_bert_checkpoint(checkpoint, config):
- def _copy_attn_layer(hf_attn_layer, pt_attn_layer):
- hf_attn_layer.q_proj.weight.data = pt_attn_layer.to_q.weight
- hf_attn_layer.k_proj.weight.data = pt_attn_layer.to_k.weight
- hf_attn_layer.v_proj.weight.data = pt_attn_layer.to_v.weight
-
- hf_attn_layer.out_proj.weight = pt_attn_layer.to_out.weight
- hf_attn_layer.out_proj.bias = pt_attn_layer.to_out.bias
-
- def _copy_linear(hf_linear, pt_linear):
- hf_linear.weight = pt_linear.weight
- hf_linear.bias = pt_linear.bias
-
- def _copy_layer(hf_layer, pt_layer):
- # copy layer norms
- _copy_linear(hf_layer.self_attn_layer_norm, pt_layer[0][0])
- _copy_linear(hf_layer.final_layer_norm, pt_layer[1][0])
-
- # copy attn
- _copy_attn_layer(hf_layer.self_attn, pt_layer[0][1])
-
- # copy MLP
- pt_mlp = pt_layer[1][1]
- _copy_linear(hf_layer.fc1, pt_mlp.net[0][0])
- _copy_linear(hf_layer.fc2, pt_mlp.net[2])
-
- def _copy_layers(hf_layers, pt_layers):
- for i, hf_layer in enumerate(hf_layers):
- if i != 0:
- i += i
- pt_layer = pt_layers[i : i + 2]
- _copy_layer(hf_layer, pt_layer)
-
- hf_model = LDMBertModel(config).eval()
-
- # copy embeds
- hf_model.model.embed_tokens.weight = checkpoint.transformer.token_emb.weight
- hf_model.model.embed_positions.weight.data = checkpoint.transformer.pos_emb.emb.weight
-
- # copy layer norm
- _copy_linear(hf_model.model.layer_norm, checkpoint.transformer.norm)
-
- # copy hidden layers
- _copy_layers(hf_model.model.layers, checkpoint.transformer.attn_layers.layers)
-
- _copy_linear(hf_model.to_logits, checkpoint.transformer.to_logits)
-
- return hf_model
-
-
-def convert_ldm_clip_checkpoint(checkpoint):
- text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
-
- keys = list(checkpoint.keys())
-
- text_model_dict = {}
-
- for key in keys:
- if key.startswith("cond_stage_model.transformer"):
- text_model_dict[key[len("cond_stage_model.transformer.") :]] = checkpoint[key]
-
- text_model.load_state_dict(text_model_dict)
-
- return text_model
-
-import os
-def convert_checkpoint(checkpoint_path, inpainting=False):
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--checkpoint_path", default=checkpoint_path, type=str, help="Path to the checkpoint to convert."
- )
- # !wget https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml
- parser.add_argument(
- "--original_config_file",
- default=None,
- type=str,
- help="The YAML config file corresponding to the original architecture.",
- )
- parser.add_argument(
- "--scheduler_type",
- default="pndm",
- type=str,
- help="Type of scheduler to use. Should be one of ['pndm', 'lms', 'ddim']",
- )
- parser.add_argument("--dump_path", default=None, type=str, help="Path to the output model.")
-
- args = parser.parse_args([])
- if args.original_config_file is None:
- if inpainting:
- args.original_config_file = "./models/v1-inpainting-inference.yaml"
- else:
- args.original_config_file = "./models/v1-inference.yaml"
-
- original_config = OmegaConf.load(args.original_config_file)
- checkpoint = torch.load(args.checkpoint_path)["state_dict"]
-
- num_train_timesteps = original_config.model.params.timesteps
- beta_start = original_config.model.params.linear_start
- beta_end = original_config.model.params.linear_end
- if args.scheduler_type == "pndm":
- scheduler = PNDMScheduler(
- beta_end=beta_end,
- beta_schedule="scaled_linear",
- beta_start=beta_start,
- num_train_timesteps=num_train_timesteps,
- skip_prk_steps=True,
- )
- elif args.scheduler_type == "lms":
- scheduler = LMSDiscreteScheduler(beta_start=beta_start, beta_end=beta_end, beta_schedule="scaled_linear")
- elif args.scheduler_type == "ddim":
- scheduler = DDIMScheduler(
- beta_start=beta_start,
- beta_end=beta_end,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- )
- else:
- raise ValueError(f"Scheduler of type {args.scheduler_type} doesn't exist!")
-
- # Convert the UNet2DConditionModel model.
- unet_config = create_unet_diffusers_config(original_config)
- converted_unet_checkpoint = convert_ldm_unet_checkpoint(checkpoint, unet_config)
-
- unet = UNet2DConditionModel(**unet_config)
- unet.load_state_dict(converted_unet_checkpoint)
-
- # Convert the VAE model.
- vae_config = create_vae_diffusers_config(original_config)
- converted_vae_checkpoint = convert_ldm_vae_checkpoint(checkpoint, vae_config)
-
- vae = AutoencoderKL(**vae_config)
- vae.load_state_dict(converted_vae_checkpoint)
-
- # Convert the text model.
- text_model_type = original_config.model.params.cond_stage_config.target.split(".")[-1]
- if text_model_type == "FrozenCLIPEmbedder":
- text_model = convert_ldm_clip_checkpoint(checkpoint)
- tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
- safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker")
- feature_extractor = AutoFeatureExtractor.from_pretrained("CompVis/stable-diffusion-safety-checker")
- pipe = StableDiffusionPipeline(
- vae=vae,
- text_encoder=text_model,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- )
- else:
- text_config = create_ldm_bert_config(original_config)
- text_model = convert_ldm_bert_checkpoint(checkpoint, text_config)
- tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
- pipe = LDMTextToImagePipeline(vqvae=vae, bert=text_model, tokenizer=tokenizer, unet=unet, scheduler=scheduler)
-
- return pipe
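
In the surrounding space this module is imported and used roughly as sketched below; the checkpoint paths are placeholders, and the function expects the referenced v1-inference.yaml / v1-inpainting-inference.yaml files to be present under ./models/.

```python
# Sketch of typical use: convert a .ckpt file into a diffusers pipeline and run it.
import torch
from convert_checkpoint import convert_checkpoint

pipe = convert_checkpoint("models/sd-v1-4.ckpt")                  # text-to-image config
# pipe = convert_checkpoint("models/sd-inpainting.ckpt", inpainting=True)

pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
image = pipe("a lighthouse at dawn, oil painting").images[0]
image.save("out.png")
```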
diff --git a/spaces/billusanda007/HireGPT/app_pdf_version.py b/spaces/billusanda007/HireGPT/app_pdf_version.py
deleted file mode 100644
index e3a576d4518a6332fdfab500ac61b52cc53f28f4..0000000000000000000000000000000000000000
--- a/spaces/billusanda007/HireGPT/app_pdf_version.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import streamlit as st
-import nltk
-from nltk.corpus import stopwords
-from nltk.tokenize import word_tokenize
-from nltk.stem import PorterStemmer
-from sklearn.feature_extraction.text import TfidfVectorizer
-from sklearn.metrics.pairwise import cosine_similarity
-from PyPDF2 import PdfReader
-import os
-from io import BytesIO
-import pickle
-import pdfminer
-from pdfminer.high_level import extract_text
-import re
-
-nltk.download('punkt')
-nltk.download('stopwords')
-
-def preprocess_text(text):
- words = word_tokenize(text.lower())
-
- stop_words = set(stopwords.words('english'))
- words = [word for word in words if word not in stop_words]
-
- stemmer = PorterStemmer()
- words = [stemmer.stem(word) for word in words]
-
- return ' '.join(words)
-
-def extract_text_from_pdf(pdf_content):
- pdf_reader = PdfReader(BytesIO(pdf_content))
- text = ''
- for page in pdf_reader.pages:
- text += page.extract_text()
- return text
-
-def clean_pdf_text(text):
-    # Same logic as the usual cleanResume helper: strip URLs, handles, hashtags and non-ASCII noise
- text = re.sub('http\S+\s*', ' ', text)
- text = re.sub('RT|cc', ' ', text)
- text = re.sub('#\S+', '', text)
- text = re.sub('@\S+', ' ', text)
- text = re.sub('[%s]' % re.escape("""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"""), ' ', text)
- text = re.sub(r'[^\x00-\x7f]',r' ', text)
- text = re.sub('\s+', ' ', text)
- return text
-
-def extract_candidate_name(text):
- # Use regular expressions to extract candidate names
- # Modify the regex pattern according to your naming conventions
- pattern = r'(?:Mr\.|Ms\.|Mrs\.)?\s?([A-Z][a-z]+)\s([A-Z][a-z]+)'
- match = re.search(pattern, text)
- if match:
- return match.group(0)
- return "Candidate Name Not Found"
-
-def calculate_similarity(job_description, cvs, cv_file_names):
- processed_job_desc = preprocess_text(job_description)
-
- processed_cvs = [preprocess_text(cv) for cv in cvs]
-
- all_text = [processed_job_desc] + processed_cvs
-
- vectorizer = TfidfVectorizer()
- tfidf_matrix = vectorizer.fit_transform(all_text)
-
- similarity_scores = cosine_similarity(tfidf_matrix)[0][1:]
-
- ranked_cvs = list(zip(cv_file_names, similarity_scores))
- ranked_cvs.sort(key=lambda x: x[1], reverse=True)
-
- return ranked_cvs
-
-def rank_and_shortlist(job_description, cv_files, threshold=0.15):
- cv_texts = [extract_text_from_pdf(cv_file.read()) for cv_file in cv_files]
- cv_file_names = [cv_file.name for cv_file in cv_files]
- cvs = [clean_pdf_text(cv_text) for cv_text in cv_texts]
- similarity_scores = calculate_similarity(job_description, cvs, cv_file_names)
-
- ranked_cvs = [(cv_name, score) for (cv_name, score) in similarity_scores]
- shortlisted_cvs = [(cv_name, score) for (cv_name, score) in ranked_cvs if score > threshold]
-
- return ranked_cvs, shortlisted_cvs
-
-def main():
- st.title("Resume Ranking App")
-
- st.write("Upload the Job Description:")
- job_description = st.text_area("Job Description", height=200, key='job_description')
-
- st.write("Upload the Resumes (PDFs):")
- cv_files = st.file_uploader("Choose PDF files", accept_multiple_files=True, type=["pdf"], key='cv_files')
-
- if st.button("Submit"):
- if job_description and cv_files:
- # Rank and shortlist candidates
- ranked_cvs, shortlisted_cvs = rank_and_shortlist(job_description, cv_files)
-
- # Display ranking with larger text
- st.markdown("### Ranking of Resumes:")
- for rank, score in ranked_cvs:
- st.markdown(f"**File Name:** {rank}, **Similarity Score:** {score:.2f}")
-
- # Display shortlisted candidates with larger text
- st.markdown("### Shortlisted Candidates:")
- if not shortlisted_cvs: # Check if the shortlisted_cvs list is empty
- st.markdown("None")
- else:
- for rank, score in shortlisted_cvs:
- st.markdown(f"**File Name:** {rank}, **Similarity Score:** {score:.2f}")
- else:
- st.write("Please upload both the job description and resumes to proceed.")
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
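
The core of the deleted app above is plain TF-IDF plus cosine similarity: the job description and every resume are vectorized together, and resumes are ranked by their similarity to the job description. A minimal, self-contained sketch of that idea, with toy strings standing in for extracted PDF text:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    job = "python developer with streamlit and nlp experience"
    resumes = {
        "a.pdf": "senior python engineer, nlp, streamlit dashboards",
        "b.pdf": "accountant with excel and reporting background",
    }

    # Fit one vocabulary over the job description and all resumes.
    matrix = TfidfVectorizer().fit_transform([job] + list(resumes.values()))
    # Similarity of each resume (rows 1..n) to the job description (row 0).
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

    for name, score in sorted(zip(resumes, scores), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: {score:.2f}")
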
diff --git a/spaces/bioriAsaeru/text-to-voice/ Ready - 1966 (1999) Losslessgolkes 1.md b/spaces/bioriAsaeru/text-to-voice/ Ready - 1966 (1999) Losslessgolkes 1.md
deleted file mode 100644
index 1dc6f29a8e1979ab31995af84d070c668d420a71..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/ Ready - 1966 (1999) Losslessgolkes 1.md
+++ /dev/null
@@ -1,6 +0,0 @@
-' Ready - 1966 (1999) Losslessgolkes 1 Download File ————— https://urloso.com/2uyQcb
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Advanced Web Attacks And Exploitation !LINK! Downloadl.md b/spaces/bioriAsaeru/text-to-voice/Advanced Web Attacks And Exploitation !LINK! Downloadl.md
deleted file mode 100644
index 9add7f7bd396c8da0006f7771ecdaa467c26c58a..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Advanced Web Attacks And Exploitation !LINK! Downloadl.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-Advanced Web Attacks and exploitation (WEB-300) is an advanced web application security course that teaches the skills needed to conduct white box web app penetration tests. Students who complete the course and pass the exam earn the Offensive Security Web Expert (OSWE) certification and will demonstrate mastery in exploiting front-facing web apps. The OSWE is one of three certifications making up the OSCE3 certification along with the OSEP for advanced pentesting and OSED for exploit development.
-Much like our popular Advanced Infrastructure Hacking class, this class talks about a wealth of hacking techniques to compromise web applications, APIs, cloud components and other associated end-points. This class focuses on specific areas of appsec and on advanced vulnerability identification and exploitation techniques (especially server side flaws). The class allows attendees to practice some neat, new and ridiculous hacks which affected real life products and have found a mention in real bug-bounty programs. The vulnerabilities selected for the class either typically go undetected by modern scanners or the exploitation techniques are not so well known.
-Advanced Web Attacks And Exploitation Downloadl DOWNLOAD ››››› https://urloso.com/2uyPq5
-Advanced Web Hacking course talks about a wealth of hacking techniques to compromise web applications, APIs and associated end-points. This course focuses on specific areas of app-sec and on advanced vulnerability identification and exploitation techniques (especially server side flaws). This hands-on course covers neat, new and ridiculous hacks which affected real life products and have found a mention in real bug-bounty programs. In this course vulnerabilities selected are ones that typically go undetected by modern scanners or the exploitation techniques are not so well known.
-Advanced Security expands the capabilities of the cybersecurity solution with URL filtering and exploit prevention to counter more threats such web-based attacks and exploitation attempts. It also increases the speed and accuracy of the detection rate for known malware with an enhanced virus signature database. The add-on package allows for more aggressive malware scans of backed up data in the Acronis Cloud, preventing threat recurrence.
-Students are expected to know how to use Burp Suite and have a basic understanding of common web attacks as well as perform basic scripting using common languages such as python, PHP and JavaScript. Each of the vulnerabilities presented have either been mirrored from real zero-day or are n-day bugs that have been discovered by the author with a focus on not just exploitation, but also on the discovery.
-This alert provides information on exploitation by cybercriminal and advanced persistent threat (APT) groups of the current coronavirus disease 2019 (COVID-19) global pandemic. It includes a non-exhaustive list of indicators of compromise (IOCs) for detection as well as mitigation advice.
-The growing threat that advanced cybersecurity attacks pose to the world was highlighted by the Colonial Pipeline attack in May 2021. The fuel pipeline operator suffered a ransomware attack launched by the DarkSide hacking group, which led to fuel disruption and mass panic buying across the U.S.
-Mobile devices run specialized operating systems with security problems. Students will learn how mobile operating systems and apps work, how to find and exploit vulnerabilities and how to defend them. Topics will include phone call, voicemail, SMS intrusion, jailbreaking, rooting, NFC attacks, mal ware, browser exploitation, and application vulnerabilities.
-
-The cyber kill chain is a series of steps that trace stages of a cyberattack from the early reconnaissance stages to the exfiltration of data. The kill chain helps us understand and combat ransomware, security breaches, and advanced persistent attacks (APTs).
-As cyberattacks grow in both number and sophistication, organizations are increasingly under the gun to protect themselves from compromise. Though companies have responded by upping their security budgets and adopting more advanced defenses, keeping up with the threats that will surface over the next few years will be a challenge.
-The advanced graphical interface of Exploit Pack makes it easy to use and supports rapid reconfiguration to adapt exploit codes, post-exploitation modules and utilities to the constantly evolving threats. Advanced technical trainings We help you and your team to unlock advanced security skills, learn new techniques, exploit development, reverse engineering and attack simulations by giving monthly online live trainings, available for free to all our Exploit Pack users.
-First of all, they need to understand the most significant threat vectors, allowing them to prioritize cybersecurity initiatives with the highest return on investment and create a successful cybersecurity plan. Ransomware, phishing, web application and vulnerability exploitation attacks, denial of service (DoS) attacks, insider threats, and attack campaigns of the nation-state and state-sponsored threat actors and Advanced Persistent Threat (APT) groups are the most prevalent threats that financial institutions face in 2022.
-Organized cybercriminal groups collaborate and share attack tactics, techniques, procedures (TTPs), tools, and resources to compromise financial institutions, resulting in an increase in cyberattacks. Moreover, nation-state attack campaigns reflect global geopolitical tensions, which have fueled a growth in cyber activity targeting governments, militaries, and the business sector, according to the Navigating Cyber 2022 report of the Financial Services Information Sharing and Analysis Center (FS-ISAC) [24]. For example, the war in Ukraine, ongoing protest activity in Hong Kong, and North Korea's continued missile launches could result in cyber activity against various targets in the US, the UK, and the EU, among other places. Retaliation may take the form of denial of service (DoS) attacks, spearphishing, destructive malware, or vulnerability exploitation attacks.
-You can't stop either of the above attacks with antivirus, however antivirus is still very important to have installed regardless. There is one technique (available via at least one free tool that I know about and one commercial product) that provides near-complete control over infection. This technique is exploit protection in the form of advanced canaries and ASLR such as provided by Microsoft EMET or Invincea Freespace. Many in the security industry will claim that these can be bypassed -- and while there is truth to this, it often requires knowledge of the target environment that goes beyond what exploit kits currently allow.
-During the period of investigation, Mandiant found that APT41 successfully compromised at least six US state government networks through the exploitation of vulnerable internet-facing web applications, often written in ASP .NET. In most of the compromises, APT41 carried out .NET deserialization attacks, although Mandiant also observed the group exploiting SQL injection and directory traversal vulnerabilities.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/CRACK LabelJoy 7.0.0.611 Server [Multilingual !EXCLUSIVE!.md b/spaces/bioriAsaeru/text-to-voice/CRACK LabelJoy 7.0.0.611 Server [Multilingual !EXCLUSIVE!.md
deleted file mode 100644
index 42380156b20b86a4eef27a44f133cb3d77f6189a..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/CRACK LabelJoy 7.0.0.611 Server [Multilingual !EXCLUSIVE!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-CRACK LabelJoy 7.0.0.611 Server [Multilingual Download File · https://urloso.com/2uyRqs
-
-LabelJoy 7.0.0.611 Server [Multilingual utorrent ... Adobe Photoshop CC 2015 (20150529.r.88) (32 64Bit) Crack Utorrent · Lassoing The Moon ... 4d29de3e1b
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Championship Manager 2008 V1 0 No DVDFixed EXE.md b/spaces/bioriAsaeru/text-to-voice/Championship Manager 2008 V1 0 No DVDFixed EXE.md
deleted file mode 100644
index c584d1d8226379e9710cb39a25862dd1e5033d8c..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Championship Manager 2008 V1 0 No DVDFixed EXE.md
+++ /dev/null
@@ -1,36 +0,0 @@
-Championship Manager 2008 V1 0 No DVDFixed EXE Download ⇒⇒⇒ https://urloso.com/2uyQsW
-
-Comments for championship-manager-2008-v10-english-no-dvdfixed-exe.rar
-
-I'm trying to install this manager, but i can't do it, it always says:
-
-"Couldn't open the file, is it compressed? Uncompress it and try again." (don't see no ".rar" extension), then after re-read the instructions in the readme.txt I don't know what to do.
-
-Can anyone help me please? Thanks.
-
-wirdeland - 2013.07.29 19:22
-
-In my case, it always said: "File error: The system cannot find the file specified". I have a copy of the manager from a DVD in my possession, and I can find my C:\Program Files (x86)\Championship Manager 08\ directory, but the program is not there.
-
-I have a full version. On my PC, the directory is under C:\Program Files (x86)\Championship Manager 08.
-
-Thiago - 2013.07.29 19:46
-
-hi wirdeland,
-
-What a strange problem. Do you have another copy of Championship Manager 2008 with the same problem? There is a version for Mac OSX that I have, but it is far from perfect (examples: some game players is buggy), so maybe the problem is with your Windows installation or configuration.
-
-I have the same issue, and it's very strange since the files are located in C:\Program Files (x86)\Championship Manager 08\ directory. And again, I have full version.
-
-Here is an example from the readme:
-
-* Add-Ons: Add-ons are installed to the directory containing the add-on file, and are added to the game as follows. A directory with the same name as the add-on is created in the /Add-Ons/ directory, and the game loads the directory specified by the game's add-on type. A complete list of the contents of an add-on can be viewed by opening the game's Add-Ons dialog.
-
-Can you say why I can't find my "Add-Ons" directory?
-
-wirdeland - 2013.07.30 14:56
-
-Thiago, I have Windows XP and I'm able to find the Add-Ons directory (I don't think that it's the problem, but anyway 4fefd39f24
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Download Film Indonesia 3 Hari Untuk Selamanya 23.md b/spaces/bioriAsaeru/text-to-voice/Download Film Indonesia 3 Hari Untuk Selamanya 23.md
deleted file mode 100644
index fe0f8a11cd0d0eeb4805fa3159c8a1ea9d6cae2d..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Download Film Indonesia 3 Hari Untuk Selamanya 23.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-banyak orang yang memakai cell phone untuk film download karena waktu sampai moda. karena itu mereka mengagetkan jadi dia pakai apa yang ada di rumah di sana. Mereka mendapatkan video yang ada di tempat mereka di sekolah. Bukan itu penggelikan. Mereka masih belajar berita tentang sesuatu yang sedikit lebih menarik. Mereka menjadi pengunjung sekolah-sekolah web bangsa yang mencoba film download di internet. Mereka itu dalam waktu kurang dari 3 tahun semua didominasi oleh komputer. Mereka mulai mengerti film sektor TV dan mengerti bahwa orang sekolah bisa download film di youtube. Mereka mengerti film yang sekolah mereka sedang menonton, di film konfirmasi, dan juga di pengantin film. Keseluruhan orang menghabiskan ratusan di komputer yang Anda dapat di tempatnya. Sehingga mereka juga meragukan apa yang ada di rumah bimbingan mereka.
-download film indonesia 3 hari untuk selamanya 23 Download Zip ⚡ https://urloso.com/2uyS1c
-kalau ini bukan yang satu ini makanya lagi tahun ini. film juga banyak dilengkapi dengan video yang bagus. dengan cara seperti yang awak dapat kumpulkan file gratis. saat nama awak dipercayai film apa yang awak curi. itu menjadi satu hal yang awak dapat untuk bertukar dengan seseorang yang telah mendirikan website. Anda mungkin pernah memainkan games. game ini adalah game yang digunakan untuk mengukur penggunaan salah satunya di perangkat android. melalui aplikasi game di android terdapat baik game lain yang digunakan untuk membantu mengisolasi yang tergolong sektor kecil. juga upaya normal untuk bisa mendapatkan aplikasi yang di sedikitnya komputer desktop dengan menggunakan browser. Hal ini adalah beberapa hal yang bisa Anda lakukan dengan minimal efekt.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Free Download Malayalam Movie 2 Raja Harishchandra A Masterpiece of Malayalam Cinema.md b/spaces/bioriAsaeru/text-to-voice/Free Download Malayalam Movie 2 Raja Harishchandra A Masterpiece of Malayalam Cinema.md
deleted file mode 100644
index c21bae95a953791f7414a9b2ac91e7a9b83c51f8..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Free Download Malayalam Movie 2 Raja Harishchandra A Masterpiece of Malayalam Cinema.md
+++ /dev/null
@@ -1,6 +0,0 @@
-free download malayalam movie 2 Raja Harishchandra Download File ——— https://urloso.com/2uyOWQ
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/model/text_encoder.py b/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/model/text_encoder.py
deleted file mode 100644
index 222f46162d2c460dfb177d456ec0991782365e42..0000000000000000000000000000000000000000
--- a/spaces/bookbot/Grad-TTS-Weildan-Playground/Grad-TTS/model/text_encoder.py
+++ /dev/null
@@ -1,326 +0,0 @@
-""" from https://github.com/jaywalnut310/glow-tts """
-
-import math
-
-import torch
-
-from model.base import BaseModule
-from model.utils import sequence_mask, convert_pad_shape
-
-
-class LayerNorm(BaseModule):
- def __init__(self, channels, eps=1e-4):
- super(LayerNorm, self).__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = torch.nn.Parameter(torch.ones(channels))
- self.beta = torch.nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- n_dims = len(x.shape)
- mean = torch.mean(x, 1, keepdim=True)
- variance = torch.mean((x - mean)**2, 1, keepdim=True)
-
- x = (x - mean) * torch.rsqrt(variance + self.eps)
-
- shape = [1, -1] + [1] * (n_dims - 2)
- x = x * self.gamma.view(*shape) + self.beta.view(*shape)
- return x
-
-
-class ConvReluNorm(BaseModule):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size,
- n_layers, p_dropout):
- super(ConvReluNorm, self).__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.conv_layers = torch.nn.ModuleList()
- self.norm_layers = torch.nn.ModuleList()
- self.conv_layers.append(torch.nn.Conv1d(in_channels, hidden_channels,
- kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = torch.nn.Sequential(torch.nn.ReLU(), torch.nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(torch.nn.Conv1d(hidden_channels, hidden_channels,
- kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = torch.nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DurationPredictor(BaseModule):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout):
- super(DurationPredictor, self).__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.p_dropout = p_dropout
-
- self.drop = torch.nn.Dropout(p_dropout)
- self.conv_1 = torch.nn.Conv1d(in_channels, filter_channels,
- kernel_size, padding=kernel_size//2)
- self.norm_1 = LayerNorm(filter_channels)
- self.conv_2 = torch.nn.Conv1d(filter_channels, filter_channels,
- kernel_size, padding=kernel_size//2)
- self.norm_2 = LayerNorm(filter_channels)
- self.proj = torch.nn.Conv1d(filter_channels, 1, 1)
-
- def forward(self, x, x_mask):
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class MultiHeadAttention(BaseModule):
- def __init__(self, channels, out_channels, n_heads, window_size=None,
- heads_share=True, p_dropout=0.0, proximal_bias=False,
- proximal_init=False):
- super(MultiHeadAttention, self).__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.window_size = window_size
- self.heads_share = heads_share
- self.proximal_bias = proximal_bias
- self.p_dropout = p_dropout
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = torch.nn.Conv1d(channels, channels, 1)
- self.conv_k = torch.nn.Conv1d(channels, channels, 1)
- self.conv_v = torch.nn.Conv1d(channels, channels, 1)
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = torch.nn.Parameter(torch.randn(n_heads_rel,
- window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = torch.nn.Parameter(torch.randn(n_heads_rel,
- window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.conv_o = torch.nn.Conv1d(channels, out_channels, 1)
- self.drop = torch.nn.Dropout(p_dropout)
-
- torch.nn.init.xavier_uniform_(self.conv_q.weight)
- torch.nn.init.xavier_uniform_(self.conv_k.weight)
- if proximal_init:
- self.conv_k.weight.data.copy_(self.conv_q.weight.data)
- self.conv_k.bias.data.copy_(self.conv_q.bias.data)
- torch.nn.init.xavier_uniform_(self.conv_v.weight)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.k_channels)
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query, key_relative_embeddings)
- rel_logits = self._relative_position_to_absolute_position(rel_logits)
- scores_local = rel_logits / math.sqrt(self.k_channels)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device,
- dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- p_attn = torch.nn.functional.softmax(scores, dim=-1)
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights,
- value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t)
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = torch.nn.functional.pad(
- relative_embeddings, convert_pad_shape([[0, 0],
- [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,
- slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- batch, heads, length, _ = x.size()
- x = torch.nn.functional.pad(x, convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = torch.nn.functional.pad(x_flat, convert_pad_shape([[0,0],[0,0],[0,length-1]]))
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- batch, heads, length, _ = x.size()
- x = torch.nn.functional.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length - 1)])
- x_flat = torch.nn.functional.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(BaseModule):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size,
- p_dropout=0.0):
- super(FFN, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.conv_1 = torch.nn.Conv1d(in_channels, filter_channels, kernel_size,
- padding=kernel_size//2)
- self.conv_2 = torch.nn.Conv1d(filter_channels, out_channels, kernel_size,
- padding=kernel_size//2)
- self.drop = torch.nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- return x * x_mask
-
-
-class Encoder(BaseModule):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers,
- kernel_size=1, p_dropout=0.0, window_size=None, **kwargs):
- super(Encoder, self).__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = torch.nn.Dropout(p_dropout)
- self.attn_layers = torch.nn.ModuleList()
- self.norm_layers_1 = torch.nn.ModuleList()
- self.ffn_layers = torch.nn.ModuleList()
- self.norm_layers_2 = torch.nn.ModuleList()
- for _ in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels,
- n_heads, window_size=window_size, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels,
- filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- for i in range(self.n_layers):
- x = x * x_mask
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class TextEncoder(BaseModule):
- def __init__(self, n_vocab, n_feats, n_channels, filter_channels,
- filter_channels_dp, n_heads, n_layers, kernel_size,
- p_dropout, window_size=None, spk_emb_dim=64, n_spks=1):
- super(TextEncoder, self).__init__()
- self.n_vocab = n_vocab
- self.n_feats = n_feats
- self.n_channels = n_channels
- self.filter_channels = filter_channels
- self.filter_channels_dp = filter_channels_dp
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.spk_emb_dim = spk_emb_dim
- self.n_spks = n_spks
-
- self.emb = torch.nn.Embedding(n_vocab, n_channels)
- torch.nn.init.normal_(self.emb.weight, 0.0, n_channels**-0.5)
-
- self.prenet = ConvReluNorm(n_channels, n_channels, n_channels,
- kernel_size=5, n_layers=3, p_dropout=0.5)
-
- self.encoder = Encoder(n_channels + (spk_emb_dim if n_spks > 1 else 0), filter_channels, n_heads, n_layers,
- kernel_size, p_dropout, window_size=window_size)
-
- self.proj_m = torch.nn.Conv1d(n_channels + (spk_emb_dim if n_spks > 1 else 0), n_feats, 1)
- self.proj_w = DurationPredictor(n_channels + (spk_emb_dim if n_spks > 1 else 0), filter_channels_dp,
- kernel_size, p_dropout)
-
- def forward(self, x, x_lengths, spk=None):
- x = self.emb(x) * math.sqrt(self.n_channels)
- x = torch.transpose(x, 1, -1)
- x_mask = torch.unsqueeze(sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.prenet(x, x_mask)
- if self.n_spks > 1:
- x = torch.cat([x, spk.unsqueeze(-1).repeat(1, 1, x.shape[-1])], dim=1)
- x = self.encoder(x, x_mask)
- mu = self.proj_m(x) * x_mask
-
- x_dp = torch.detach(x)
- logw = self.proj_w(x_dp, x_mask)
-
- return mu, logw, x_mask
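
For orientation, the encoder above takes a batch of token ids plus their lengths and returns the predicted mel-frame means `mu`, log-durations `logw`, and the sequence mask. A hedged smoke test, assuming the Grad-TTS `model` package is importable and using illustrative (not canonical) hyperparameters:

    import torch
    from model.text_encoder import TextEncoder  # assumes the Grad-TTS repo root is on sys.path

    enc = TextEncoder(n_vocab=149, n_feats=80, n_channels=192,
                      filter_channels=768, filter_channels_dp=256,
                      n_heads=2, n_layers=6, kernel_size=3,
                      p_dropout=0.1, window_size=4)

    x = torch.randint(0, 149, (2, 37))         # dummy phoneme ids
    x_lengths = torch.tensor([37, 25])
    mu, logw, x_mask = enc(x, x_lengths)
    print(mu.shape, logw.shape, x_mask.shape)  # (2, 80, 37), (2, 1, 37), (2, 1, 37)
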
diff --git a/spaces/caffeinum/VToonify/vtoonify/model/stylegan/non_leaking.py b/spaces/caffeinum/VToonify/vtoonify/model/stylegan/non_leaking.py
deleted file mode 100644
index d0447535fed22d3ad4ac719b2b5ac6b7c58e6435..0000000000000000000000000000000000000000
--- a/spaces/caffeinum/VToonify/vtoonify/model/stylegan/non_leaking.py
+++ /dev/null
@@ -1,469 +0,0 @@
-import math
-
-import torch
-from torch import autograd
-from torch.nn import functional as F
-import numpy as np
-
-from model.stylegan.distributed import reduce_sum
-from model.stylegan.op import upfirdn2d
-
-
-class AdaptiveAugment:
- def __init__(self, ada_aug_target, ada_aug_len, update_every, device):
- self.ada_aug_target = ada_aug_target
- self.ada_aug_len = ada_aug_len
- self.update_every = update_every
-
- self.ada_update = 0
- self.ada_aug_buf = torch.tensor([0.0, 0.0], device=device)
- self.r_t_stat = 0
- self.ada_aug_p = 0
-
- @torch.no_grad()
- def tune(self, real_pred):
- self.ada_aug_buf += torch.tensor(
- (torch.sign(real_pred).sum().item(), real_pred.shape[0]),
- device=real_pred.device,
- )
- self.ada_update += 1
-
- if self.ada_update % self.update_every == 0:
- self.ada_aug_buf = reduce_sum(self.ada_aug_buf)
- pred_signs, n_pred = self.ada_aug_buf.tolist()
-
- self.r_t_stat = pred_signs / n_pred
-
- if self.r_t_stat > self.ada_aug_target:
- sign = 1
-
- else:
- sign = -1
-
- self.ada_aug_p += sign * n_pred / self.ada_aug_len
- self.ada_aug_p = min(1, max(0, self.ada_aug_p))
- self.ada_aug_buf.mul_(0)
- self.ada_update = 0
-
- return self.ada_aug_p
-
-
-SYM6 = (
- 0.015404109327027373,
- 0.0034907120842174702,
- -0.11799011114819057,
- -0.048311742585633,
- 0.4910559419267466,
- 0.787641141030194,
- 0.3379294217276218,
- -0.07263752278646252,
- -0.021060292512300564,
- 0.04472490177066578,
- 0.0017677118642428036,
- -0.007800708325034148,
-)
-
-
-def translate_mat(t_x, t_y, device="cpu"):
- batch = t_x.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- translate = torch.stack((t_x, t_y), 1)
- mat[:, :2, 2] = translate
-
- return mat
-
-
-def rotate_mat(theta, device="cpu"):
- batch = theta.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- sin_t = torch.sin(theta)
- cos_t = torch.cos(theta)
- rot = torch.stack((cos_t, -sin_t, sin_t, cos_t), 1).view(batch, 2, 2)
- mat[:, :2, :2] = rot
-
- return mat
-
-
-def scale_mat(s_x, s_y, device="cpu"):
- batch = s_x.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- mat[:, 0, 0] = s_x
- mat[:, 1, 1] = s_y
-
- return mat
-
-
-def translate3d_mat(t_x, t_y, t_z):
- batch = t_x.shape[0]
-
- mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- translate = torch.stack((t_x, t_y, t_z), 1)
- mat[:, :3, 3] = translate
-
- return mat
-
-
-def rotate3d_mat(axis, theta):
- batch = theta.shape[0]
-
- u_x, u_y, u_z = axis
-
- eye = torch.eye(3).unsqueeze(0)
- cross = torch.tensor([(0, -u_z, u_y), (u_z, 0, -u_x), (-u_y, u_x, 0)]).unsqueeze(0)
- outer = torch.tensor(axis)
- outer = (outer.unsqueeze(1) * outer).unsqueeze(0)
-
- sin_t = torch.sin(theta).view(-1, 1, 1)
- cos_t = torch.cos(theta).view(-1, 1, 1)
-
- rot = cos_t * eye + sin_t * cross + (1 - cos_t) * outer
-
- eye_4 = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- eye_4[:, :3, :3] = rot
-
- return eye_4
-
-
-def scale3d_mat(s_x, s_y, s_z):
- batch = s_x.shape[0]
-
- mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- mat[:, 0, 0] = s_x
- mat[:, 1, 1] = s_y
- mat[:, 2, 2] = s_z
-
- return mat
-
-
-def luma_flip_mat(axis, i):
- batch = i.shape[0]
-
- eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- axis = torch.tensor(axis + (0,))
- flip = 2 * torch.ger(axis, axis) * i.view(-1, 1, 1)
-
- return eye - flip
-
-
-def saturation_mat(axis, i):
- batch = i.shape[0]
-
- eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- axis = torch.tensor(axis + (0,))
- axis = torch.ger(axis, axis)
- saturate = axis + (eye - axis) * i.view(-1, 1, 1)
-
- return saturate
-
-
-def lognormal_sample(size, mean=0, std=1, device="cpu"):
- return torch.empty(size, device=device).log_normal_(mean=mean, std=std)
-
-
-def category_sample(size, categories, device="cpu"):
- category = torch.tensor(categories, device=device)
- sample = torch.randint(high=len(categories), size=(size,), device=device)
-
- return category[sample]
-
-
-def uniform_sample(size, low, high, device="cpu"):
- return torch.empty(size, device=device).uniform_(low, high)
-
-
-def normal_sample(size, mean=0, std=1, device="cpu"):
- return torch.empty(size, device=device).normal_(mean, std)
-
-
-def bernoulli_sample(size, p, device="cpu"):
- return torch.empty(size, device=device).bernoulli_(p)
-
-
-def random_mat_apply(p, transform, prev, eye, device="cpu"):
- size = transform.shape[0]
- select = bernoulli_sample(size, p, device=device).view(size, 1, 1)
- select_transform = select * transform + (1 - select) * eye
-
- return select_transform @ prev
-
-
-def sample_affine(p, size, height, width, device="cpu"):
- G = torch.eye(3, device=device).unsqueeze(0).repeat(size, 1, 1)
- eye = G
-
- # flip
- param = category_sample(size, (0, 1))
- Gc = scale_mat(1 - 2.0 * param, torch.ones(size), device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('flip', G, scale_mat(1 - 2.0 * param, torch.ones(size)), sep='\n')
-
- # 90 rotate
- #param = category_sample(size, (0, 3))
- #Gc = rotate_mat(-math.pi / 2 * param, device=device)
- #G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('90 rotate', G, rotate_mat(-math.pi / 2 * param), sep='\n')
-
- # integer translate
- param = uniform_sample(size, -0.125, 0.125)
- param_height = torch.round(param * height) / height
- param_width = torch.round(param * width) / width
- Gc = translate_mat(param_width, param_height, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('integer translate', G, translate_mat(param_width, param_height), sep='\n')
-
- # isotropic scale
- param = lognormal_sample(size, std=0.2 * math.log(2))
- Gc = scale_mat(param, param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('isotropic scale', G, scale_mat(param, param), sep='\n')
-
- p_rot = 1 - math.sqrt(1 - p)
-
- # pre-rotate
- param = uniform_sample(size, -math.pi, math.pi)
- Gc = rotate_mat(-param, device=device)
- G = random_mat_apply(p_rot, Gc, G, eye, device=device)
- # print('pre-rotate', G, rotate_mat(-param), sep='\n')
-
- # anisotropic scale
- param = lognormal_sample(size, std=0.2 * math.log(2))
- Gc = scale_mat(param, 1 / param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('anisotropic scale', G, scale_mat(param, 1 / param), sep='\n')
-
- # post-rotate
- param = uniform_sample(size, -math.pi, math.pi)
- Gc = rotate_mat(-param, device=device)
- G = random_mat_apply(p_rot, Gc, G, eye, device=device)
- # print('post-rotate', G, rotate_mat(-param), sep='\n')
-
- # fractional translate
- param = normal_sample(size, std=0.125)
- Gc = translate_mat(param, param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('fractional translate', G, translate_mat(param, param), sep='\n')
-
- return G
-
-
-def sample_color(p, size):
- C = torch.eye(4).unsqueeze(0).repeat(size, 1, 1)
- eye = C
- axis_val = 1 / math.sqrt(3)
- axis = (axis_val, axis_val, axis_val)
-
- # brightness
- param = normal_sample(size, std=0.2)
- Cc = translate3d_mat(param, param, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # contrast
- param = lognormal_sample(size, std=0.5 * math.log(2))
- Cc = scale3d_mat(param, param, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # luma flip
- param = category_sample(size, (0, 1))
- Cc = luma_flip_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # hue rotation
- param = uniform_sample(size, -math.pi, math.pi)
- Cc = rotate3d_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # saturation
- param = lognormal_sample(size, std=1 * math.log(2))
- Cc = saturation_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- return C
-
-
-def make_grid(shape, x0, x1, y0, y1, device):
- n, c, h, w = shape
- grid = torch.empty(n, h, w, 3, device=device)
- grid[:, :, :, 0] = torch.linspace(x0, x1, w, device=device)
- grid[:, :, :, 1] = torch.linspace(y0, y1, h, device=device).unsqueeze(-1)
- grid[:, :, :, 2] = 1
-
- return grid
-
-
-def affine_grid(grid, mat):
- n, h, w, _ = grid.shape
- return (grid.view(n, h * w, 3) @ mat.transpose(1, 2)).view(n, h, w, 2)
-
-
-def get_padding(G, height, width, kernel_size):
- device = G.device
-
- cx = (width - 1) / 2
- cy = (height - 1) / 2
- cp = torch.tensor(
- [(-cx, -cy, 1), (cx, -cy, 1), (cx, cy, 1), (-cx, cy, 1)], device=device
- )
- cp = G @ cp.T
-
- pad_k = kernel_size // 4
-
- pad = cp[:, :2, :].permute(1, 0, 2).flatten(1)
- pad = torch.cat((-pad, pad)).max(1).values
- pad = pad + torch.tensor([pad_k * 2 - cx, pad_k * 2 - cy] * 2, device=device)
- pad = pad.max(torch.tensor([0, 0] * 2, device=device))
- pad = pad.min(torch.tensor([width - 1, height - 1] * 2, device=device))
-
- pad_x1, pad_y1, pad_x2, pad_y2 = pad.ceil().to(torch.int32)
-
- return pad_x1, pad_x2, pad_y1, pad_y2
-
-
-def try_sample_affine_and_pad(img, p, kernel_size, G=None):
- batch, _, height, width = img.shape
-
- G_try = G
-
- if G is None:
- G_try = torch.inverse(sample_affine(p, batch, height, width))
-
- pad_x1, pad_x2, pad_y1, pad_y2 = get_padding(G_try, height, width, kernel_size)
-
- img_pad = F.pad(img, (pad_x1, pad_x2, pad_y1, pad_y2), mode="reflect")
-
- return img_pad, G_try, (pad_x1, pad_x2, pad_y1, pad_y2)
-
-
-class GridSampleForward(autograd.Function):
- @staticmethod
- def forward(ctx, input, grid):
- out = F.grid_sample(
- input, grid, mode="bilinear", padding_mode="zeros", align_corners=False
- )
- ctx.save_for_backward(input, grid)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- input, grid = ctx.saved_tensors
- grad_input, grad_grid = GridSampleBackward.apply(grad_output, input, grid)
-
- return grad_input, grad_grid
-
-
-class GridSampleBackward(autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input, grid):
- op = torch._C._jit_get_operation("aten::grid_sampler_2d_backward")
- grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
- ctx.save_for_backward(grid)
-
- return grad_input, grad_grid
-
- @staticmethod
- def backward(ctx, grad_grad_input, grad_grad_grid):
- grid, = ctx.saved_tensors
- grad_grad_output = None
-
- if ctx.needs_input_grad[0]:
- grad_grad_output = GridSampleForward.apply(grad_grad_input, grid)
-
- return grad_grad_output, None, None
-
-
-grid_sample = GridSampleForward.apply
-
-
-def scale_mat_single(s_x, s_y):
- return torch.tensor(((s_x, 0, 0), (0, s_y, 0), (0, 0, 1)), dtype=torch.float32)
-
-
-def translate_mat_single(t_x, t_y):
- return torch.tensor(((1, 0, t_x), (0, 1, t_y), (0, 0, 1)), dtype=torch.float32)
-
-
-def random_apply_affine(img, p, G=None, antialiasing_kernel=SYM6):
- kernel = antialiasing_kernel
- len_k = len(kernel)
-
- kernel = torch.as_tensor(kernel).to(img)
- # kernel = torch.ger(kernel, kernel).to(img)
- kernel_flip = torch.flip(kernel, (0,))
-
- img_pad, G, (pad_x1, pad_x2, pad_y1, pad_y2) = try_sample_affine_and_pad(
- img, p, len_k, G
- )
-
- G_inv = (
- translate_mat_single((pad_x1 - pad_x2).item() / 2, (pad_y1 - pad_y2).item() / 2)
- @ G
- )
- up_pad = (
- (len_k + 2 - 1) // 2,
- (len_k - 2) // 2,
- (len_k + 2 - 1) // 2,
- (len_k - 2) // 2,
- )
- img_2x = upfirdn2d(img_pad, kernel.unsqueeze(0), up=(2, 1), pad=(*up_pad[:2], 0, 0))
- img_2x = upfirdn2d(img_2x, kernel.unsqueeze(1), up=(1, 2), pad=(0, 0, *up_pad[2:]))
- G_inv = scale_mat_single(2, 2) @ G_inv @ scale_mat_single(1 / 2, 1 / 2)
- G_inv = translate_mat_single(-0.5, -0.5) @ G_inv @ translate_mat_single(0.5, 0.5)
- batch_size, channel, height, width = img.shape
- pad_k = len_k // 4
- shape = (batch_size, channel, (height + pad_k * 2) * 2, (width + pad_k * 2) * 2)
- G_inv = (
- scale_mat_single(2 / img_2x.shape[3], 2 / img_2x.shape[2])
- @ G_inv
- @ scale_mat_single(1 / (2 / shape[3]), 1 / (2 / shape[2]))
- )
- grid = F.affine_grid(G_inv[:, :2, :].to(img_2x), shape, align_corners=False)
- img_affine = grid_sample(img_2x, grid)
- d_p = -pad_k * 2
- down_pad = (
- d_p + (len_k - 2 + 1) // 2,
- d_p + (len_k - 2) // 2,
- d_p + (len_k - 2 + 1) // 2,
- d_p + (len_k - 2) // 2,
- )
- img_down = upfirdn2d(
- img_affine, kernel_flip.unsqueeze(0), down=(2, 1), pad=(*down_pad[:2], 0, 0)
- )
- img_down = upfirdn2d(
- img_down, kernel_flip.unsqueeze(1), down=(1, 2), pad=(0, 0, *down_pad[2:])
- )
-
- return img_down, G
-
-
-def apply_color(img, mat):
- batch = img.shape[0]
- img = img.permute(0, 2, 3, 1)
- mat_mul = mat[:, :3, :3].transpose(1, 2).view(batch, 1, 3, 3)
- mat_add = mat[:, :3, 3].view(batch, 1, 1, 3)
- img = img @ mat_mul + mat_add
- img = img.permute(0, 3, 1, 2)
-
- return img
-
-
-def random_apply_color(img, p, C=None):
- if C is None:
- C = sample_color(p, img.shape[0])
-
- img = apply_color(img, C.to(img))
-
- return img, C
-
-
-def augment(img, p, transform_matrix=(None, None)):
- img, G = random_apply_affine(img, p, transform_matrix[0])
- if img.shape[1] == 3:
- img, C = random_apply_color(img, p, transform_matrix[1])
- else:
- tmp, C = random_apply_color(img[:,0:3], p, transform_matrix[1])
- img = torch.cat((tmp, img[:,3:]), dim=1)
-
- return img, (G, C)
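
In adaptive discriminator augmentation, `augment` is applied to real and generated batches with the same probability `p`, the returned `(G, C)` matrices can be reused when two tensors must receive identical transforms, and `AdaptiveAugment.tune` nudges `p` from the sign statistics of the discriminator's predictions on reals. A hedged, single-process sketch (module path, shapes, and the availability of the repo's `upfirdn2d` op are assumptions):

    import torch
    from model.stylegan.non_leaking import augment, AdaptiveAugment

    device = "cpu"
    ada = AdaptiveAugment(ada_aug_target=0.6, ada_aug_len=500_000,
                          update_every=8, device=device)
    p = 0.0

    real = torch.randn(4, 3, 64, 64, device=device)   # stand-in image batch
    real_aug, transform = augment(real, p)             # samples fresh G (affine) and C (color)
    # To apply the *same* transform to another tensor:
    # other_aug, _ = augment(other, p, transform)

    real_pred = torch.randn(4, 1, device=device)       # stand-in D(real) logits
    p = ada.tune(real_pred)                            # updates p every `update_every` calls
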
diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/loss.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/loss.py
deleted file mode 100644
index cc66298a14997da4aa2efc71e37c0a6bcda53fd1..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/loss.py
+++ /dev/null
@@ -1,398 +0,0 @@
-from multiprocessing.sharedctypes import Value
-import torch
-import torch.distributed.nn
-from torch import distributed as dist, nn as nn
-from torch.nn import functional as F
-import numpy as np
-from sklearn.metrics import average_precision_score, roc_auc_score, accuracy_score
-
-try:
- import horovod.torch as hvd
-except ImportError:
- hvd = None
-
-
-def gather_features(
- audio_features,
- text_features,
- audio_features_mlp=None,
- text_features_mlp=None,
- local_loss=False,
- gather_with_grad=False,
- rank=0,
- world_size=1,
- use_horovod=False,
- mlp_loss=False,
-):
- if use_horovod:
- assert hvd is not None, "Please install horovod"
- if gather_with_grad:
- all_audio_features = hvd.allgather(audio_features)
- all_text_features = hvd.allgather(text_features)
- if mlp_loss:
- all_audio_features_mlp = hvd.allgather(audio_features_mlp)
- all_text_features_mlp = hvd.allgather(text_features_mlp)
- else:
- with torch.no_grad():
- all_audio_features = hvd.allgather(audio_features)
- all_text_features = hvd.allgather(text_features)
- if mlp_loss:
- all_audio_features_mlp = hvd.allgather(audio_features_mlp)
- all_text_features_mlp = hvd.allgather(text_features_mlp)
- if not local_loss:
- # ensure grads for local rank when all_* features don't have a gradient
- gathered_audio_features = list(
- all_audio_features.chunk(world_size, dim=0)
- )
- gathered_text_features = list(
- all_text_features.chunk(world_size, dim=0)
- )
- gathered_audio_features[rank] = audio_features
- gathered_text_features[rank] = text_features
- all_audio_features = torch.cat(gathered_audio_features, dim=0)
- all_text_features = torch.cat(gathered_text_features, dim=0)
- if mlp_loss:
- gathered_audio_features_mlp = list(
- all_audio_features_mlp.chunk(world_size, dim=0)
- )
- gathered_text_features_mlp = list(
- all_text_features_mlp.chunk(world_size, dim=0)
- )
- gathered_audio_features_mlp[rank] = audio_features_mlp
- gathered_text_features_mlp[rank] = text_features_mlp
- all_audio_features_mlp = torch.cat(
- gathered_audio_features_mlp, dim=0
- )
- all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0)
- else:
- # We gather tensors from all gpus
- if gather_with_grad:
- all_audio_features = torch.cat(
- torch.distributed.nn.all_gather(audio_features), dim=0
- )
- all_text_features = torch.cat(
- torch.distributed.nn.all_gather(text_features), dim=0
- )
- if mlp_loss:
- all_audio_features_mlp = torch.cat(
- torch.distributed.nn.all_gather(audio_features_mlp), dim=0
- )
- all_text_features_mlp = torch.cat(
- torch.distributed.nn.all_gather(text_features_mlp), dim=0
- )
- else:
- gathered_audio_features = [
- torch.zeros_like(audio_features) for _ in range(world_size)
- ]
- gathered_text_features = [
- torch.zeros_like(text_features) for _ in range(world_size)
- ]
- dist.all_gather(gathered_audio_features, audio_features)
- dist.all_gather(gathered_text_features, text_features)
- if mlp_loss:
- gathered_audio_features_mlp = [
- torch.zeros_like(audio_features_mlp) for _ in range(world_size)
- ]
- gathered_text_features_mlp = [
- torch.zeros_like(text_features_mlp) for _ in range(world_size)
- ]
- dist.all_gather(gathered_audio_features_mlp, audio_features_mlp)
- dist.all_gather(gathered_text_features_mlp, text_features_mlp)
- if not local_loss:
- # ensure grads for local rank when all_* features don't have a gradient
- gathered_audio_features[rank] = audio_features
- gathered_text_features[rank] = text_features
- if mlp_loss:
- gathered_audio_features_mlp[rank] = audio_features_mlp
- gathered_text_features_mlp[rank] = text_features_mlp
-
- all_audio_features = torch.cat(gathered_audio_features, dim=0)
- all_text_features = torch.cat(gathered_text_features, dim=0)
- if mlp_loss:
- all_audio_features_mlp = torch.cat(gathered_audio_features_mlp, dim=0)
- all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0)
- if mlp_loss:
- return (
- all_audio_features,
- all_text_features,
- all_audio_features_mlp,
- all_text_features_mlp,
- )
- else:
- return all_audio_features, all_text_features
-
-
-class ClipLoss(nn.Module):
- def __init__(
- self,
- local_loss=False,
- gather_with_grad=False,
- cache_labels=False,
- rank=0,
- world_size=1,
- use_horovod=False,
- mlp_loss=False,
- weight_loss_kappa=0,
- ):
- super().__init__()
- self.local_loss = local_loss
- self.gather_with_grad = gather_with_grad
- self.cache_labels = cache_labels
- self.rank = rank
- self.world_size = world_size
- self.use_horovod = use_horovod
- self.mlp_loss = mlp_loss
- self.weighted_loss = bool(weight_loss_kappa != 0)
- self.weight_loss_kappa = weight_loss_kappa
- # cache state
- self.prev_num_logits = 0
- self.labels = {}
-
- def forward(
- self,
- audio_features,
- text_features,
- logit_scale_a,
- logit_scale_t=None,
- audio_features_mlp=None,
- text_features_mlp=None,
- ):
- device = audio_features.device
- if self.mlp_loss:
- if self.world_size > 1:
- (
- all_audio_features,
- all_text_features,
- all_audio_features_mlp,
- all_text_features_mlp,
- ) = gather_features(
- audio_features=audio_features,
- text_features=text_features,
- audio_features_mlp=audio_features_mlp,
- text_features_mlp=text_features_mlp,
- local_loss=self.local_loss,
- gather_with_grad=self.gather_with_grad,
- rank=self.rank,
- world_size=self.world_size,
- use_horovod=self.use_horovod,
- mlp_loss=self.mlp_loss,
- )
- if self.local_loss:
- a_logits_per_audio = (
- logit_scale_a * audio_features @ all_text_features_mlp.T
- )
- a_logits_per_text = (
- logit_scale_a * text_features_mlp @ all_audio_features.T
- )
- t_logits_per_audio = (
- logit_scale_t * audio_features_mlp @ all_text_features.T
- )
- t_logits_per_text = (
- logit_scale_t * text_features @ all_audio_features_mlp.T
- )
- else:
- a_logits_per_audio = (
- logit_scale_a * all_audio_features @ all_text_features_mlp.T
- )
- a_logits_per_text = a_logits_per_audio.T
- t_logits_per_audio = (
- logit_scale_t * all_audio_features_mlp @ all_text_features.T
- )
- t_logits_per_text = t_logits_per_audio.T
- else:
- a_logits_per_audio = (
- logit_scale_a * audio_features @ text_features_mlp.T
- )
- a_logits_per_text = logit_scale_a * text_features_mlp @ audio_features.T
- t_logits_per_audio = (
- logit_scale_t * audio_features_mlp @ text_features.T
- )
- t_logits_per_text = logit_scale_t * text_features @ audio_features_mlp.T
-
- # calculated ground-truth and cache if enabled
- num_logits = a_logits_per_audio.shape[0]
- if self.prev_num_logits != num_logits or device not in self.labels:
- labels = torch.arange(num_logits, device=device, dtype=torch.long)
- if self.world_size > 1 and self.local_loss:
- labels = labels + num_logits * self.rank
- if self.cache_labels:
- self.labels[device] = labels
- self.prev_num_logits = num_logits
- else:
- labels = self.labels[device]
-
- if not self.weighted_loss:
- total_loss = (
- F.cross_entropy(a_logits_per_audio, labels)
- + F.cross_entropy(a_logits_per_text, labels)
- + F.cross_entropy(t_logits_per_audio, labels)
- + F.cross_entropy(t_logits_per_text, labels)
- ) / 4
- else:
- audio_weight = (audio_features @ audio_features.T).detach()
- audio_weight = (
- torch.exp(
- torch.sum(audio_weight, axis=1)
- / (self.weight_loss_kappa * len(audio_weight))
- )
- ).detach()
- text_weight = (text_features @ text_features.T).detach()
- text_weight = (
- torch.exp(
- torch.sum(text_weight, axis=1)
- / (self.weight_loss_kappa * len(text_features))
- )
- ).detach()
- total_loss = (
- F.cross_entropy(a_logits_per_audio, labels, weight=audio_weight)
- + F.cross_entropy(a_logits_per_text, labels, weight=audio_weight)
- + F.cross_entropy(t_logits_per_audio, labels, weight=text_weight)
- + F.cross_entropy(t_logits_per_text, labels, weight=text_weight)
- ) / 4
- else:
- if self.world_size > 1:
- all_audio_features, all_text_features = gather_features(
- audio_features=audio_features,
- text_features=text_features,
- local_loss=self.local_loss,
- gather_with_grad=self.gather_with_grad,
- rank=self.rank,
- world_size=self.world_size,
- use_horovod=self.use_horovod,
- mlp_loss=self.mlp_loss,
- )
-
- if self.local_loss:
- logits_per_audio = (
- logit_scale_a * audio_features @ all_text_features.T
- )
- logits_per_text = (
- logit_scale_a * text_features @ all_audio_features.T
- )
- else:
- logits_per_audio = (
- logit_scale_a * all_audio_features @ all_text_features.T
- )
- logits_per_text = logits_per_audio.T
- else:
- logits_per_audio = logit_scale_a * audio_features @ text_features.T
- logits_per_text = logit_scale_a * text_features @ audio_features.T
-
- # calculated ground-truth and cache if enabled
- num_logits = logits_per_audio.shape[0]
- if self.prev_num_logits != num_logits or device not in self.labels:
- labels = torch.arange(num_logits, device=device, dtype=torch.long)
- if self.world_size > 1 and self.local_loss:
- labels = labels + num_logits * self.rank
- if self.cache_labels:
- self.labels[device] = labels
- self.prev_num_logits = num_logits
- else:
- labels = self.labels[device]
- if not self.weighted_loss:
- total_loss = (
- F.cross_entropy(logits_per_audio, labels)
- + F.cross_entropy(logits_per_text, labels)
- ) / 2
- else:
- audio_weight = (all_audio_features @ all_audio_features.T).detach()
- audio_weight = (
- torch.exp(
- torch.sum(audio_weight, axis=1)
- / (self.weight_loss_kappa * len(all_audio_features))
- )
- ).detach()
- text_weight = (all_text_features @ all_text_features.T).detach()
- text_weight = (
- torch.exp(
- torch.sum(text_weight, axis=1)
- / (self.weight_loss_kappa * len(all_text_features))
- )
- ).detach()
- total_loss = (
- F.cross_entropy(logits_per_audio, labels, weight=text_weight)
- + F.cross_entropy(logits_per_text, labels, weight=audio_weight)
- ) / 2
- return total_loss
-
-
-def lp_gather_features(pred, target, world_size=1, use_horovod=False):
- if use_horovod:
- assert hvd is not None, "Please install horovod"
- with torch.no_grad():
- all_preds = hvd.allgather(pred)
- all_targets = hvd.allgather(target)
- else:
- gathered_preds = [torch.zeros_like(pred) for _ in range(world_size)]
- gathered_targets = [torch.zeros_like(target) for _ in range(world_size)]
-
- dist.all_gather(gathered_preds, pred)
- dist.all_gather(gathered_targets, target)
- all_preds = torch.cat(gathered_preds, dim=0)
- all_targets = torch.cat(gathered_targets, dim=0)
-
- return all_preds, all_targets
-
-
-def get_map(pred, target):
- pred = torch.sigmoid(pred).numpy()
- target = target.numpy()
- return np.mean(average_precision_score(target, pred, average=None))
-
-
-def get_acc(pred, target):
- pred = torch.argmax(pred, 1).numpy()
- target = torch.argmax(target, 1).numpy()
- return accuracy_score(target, pred)
-
-
-def get_mauc(pred, target):
- pred = torch.sigmoid(pred).numpy()
- target = target.numpy()
- return np.mean(roc_auc_score(target, pred, average=None))
-
-
-class LPMetrics(object):
- def __init__(self, metric_names=["map", "acc", "mauc"]):
- self.metrics = []
- for name in metric_names:
- self.metrics.append(self.get_metric(name))
- self.metric_names = metric_names
-
- def get_metric(self, name):
- if name == "map":
- return get_map
- elif name == "acc":
- return get_acc
- elif name == "mauc":
- return get_mauc
- else:
- raise ValueError(f"the metric should be at least one of [map, acc, mauc]")
-
- def evaluate_mertics(self, pred, target):
- metric_dict = {}
- for i in range(len(self.metric_names)):
- metric_dict[self.metric_names[i]] = self.metrics[i](pred, target)
- return metric_dict
-
-
-def calc_celoss(pred, target):
- target = torch.argmax(target, 1).long()
- return nn.CrossEntropyLoss()(pred, target)
-
-
-class LPLoss(nn.Module):
- def __init__(self, loss_name):
- super().__init__()
- if loss_name == "bce":
- self.loss_func = nn.BCEWithLogitsLoss()
- elif loss_name == "ce":
- self.loss_func = calc_celoss
- elif loss_name == "mse":
- self.loss_func = nn.MSELoss()
- else:
- raise ValueError(f"the loss func should be at least one of [bce, ce, mse]")
-
- def forward(self, pred, target):
- loss = self.loss_func(pred, target)
- return loss
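
The `ClipLoss` above is a symmetric audio-text contrastive cross-entropy over the in-batch similarity matrix. A hedged single-process sketch with dummy normalized embeddings (module path, batch size, and embedding width are assumptions):

    import torch
    import torch.nn.functional as F
    from audioldm.clap.open_clip.loss import ClipLoss

    loss_fn = ClipLoss(local_loss=False, gather_with_grad=False,
                       cache_labels=True, rank=0, world_size=1)

    audio = F.normalize(torch.randn(8, 512), dim=-1)   # dummy audio embeddings
    text = F.normalize(torch.randn(8, 512), dim=-1)    # dummy text embeddings
    logit_scale_a = torch.tensor(100.0)                # exp of the learned temperature

    loss = loss_fn(audio, text, logit_scale_a)
    print(loss.item())
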
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageWin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageWin.py
deleted file mode 100644
index ca9b14c8adf7a7a05309e69e86465b3ddad30811..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageWin.py
+++ /dev/null
@@ -1,230 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# a Windows DIB display interface
-#
-# History:
-# 1996-05-20 fl Created
-# 1996-09-20 fl Fixed subregion exposure
-# 1997-09-21 fl Added draw primitive (for tzPrint)
-# 2003-05-21 fl Added experimental Window/ImageWindow classes
-# 2003-09-05 fl Added fromstring/tostring methods
-#
-# Copyright (c) Secret Labs AB 1997-2003.
-# Copyright (c) Fredrik Lundh 1996-2003.
-#
-# See the README file for information on usage and redistribution.
-#
-
-from . import Image
-
-
-class HDC:
- """
- Wraps an HDC integer. The resulting object can be passed to the
- :py:meth:`~PIL.ImageWin.Dib.draw` and :py:meth:`~PIL.ImageWin.Dib.expose`
- methods.
- """
-
- def __init__(self, dc):
- self.dc = dc
-
- def __int__(self):
- return self.dc
-
-
-class HWND:
- """
- Wraps an HWND integer. The resulting object can be passed to the
- :py:meth:`~PIL.ImageWin.Dib.draw` and :py:meth:`~PIL.ImageWin.Dib.expose`
- methods, instead of a DC.
- """
-
- def __init__(self, wnd):
- self.wnd = wnd
-
- def __int__(self):
- return self.wnd
-
-
-class Dib:
- """
- A Windows bitmap with the given mode and size. The mode can be one of "1",
- "L", "P", or "RGB".
-
- If the display requires a palette, this constructor creates a suitable
- palette and associates it with the image. For an "L" image, 128 greylevels
- are allocated. For an "RGB" image, a 6x6x6 colour cube is used, together
- with 20 greylevels.
-
- To make sure that palettes work properly under Windows, you must call the
- ``palette`` method upon certain events from Windows.
-
- :param image: Either a PIL image, or a mode string. If a mode string is
- used, a size must also be given. The mode can be one of "1",
- "L", "P", or "RGB".
- :param size: If the first argument is a mode string, this
- defines the size of the image.
- """
-
- def __init__(self, image, size=None):
- if hasattr(image, "mode") and hasattr(image, "size"):
- mode = image.mode
- size = image.size
- else:
- mode = image
- image = None
- if mode not in ["1", "L", "P", "RGB"]:
- mode = Image.getmodebase(mode)
- self.image = Image.core.display(mode, size)
- self.mode = mode
- self.size = size
- if image:
- self.paste(image)
-
- def expose(self, handle):
- """
- Copy the bitmap contents to a device context.
-
- :param handle: Device context (HDC), cast to a Python integer, or an
- HDC or HWND instance. In PythonWin, you can use
- ``CDC.GetHandleAttrib()`` to get a suitable handle.
- """
- if isinstance(handle, HWND):
- dc = self.image.getdc(handle)
- try:
- result = self.image.expose(dc)
- finally:
- self.image.releasedc(handle, dc)
- else:
- result = self.image.expose(handle)
- return result
-
- def draw(self, handle, dst, src=None):
- """
- Same as expose, but allows you to specify where to draw the image, and
- what part of it to draw.
-
- The destination and source areas are given as 4-tuple rectangles. If
- the source is omitted, the entire image is copied. If the source and
- the destination have different sizes, the image is resized as
- necessary.
- """
- if not src:
- src = (0, 0) + self.size
- if isinstance(handle, HWND):
- dc = self.image.getdc(handle)
- try:
- result = self.image.draw(dc, dst, src)
- finally:
- self.image.releasedc(handle, dc)
- else:
- result = self.image.draw(handle, dst, src)
- return result
-
- def query_palette(self, handle):
- """
- Installs the palette associated with the image in the given device
- context.
-
- This method should be called upon **QUERYNEWPALETTE** and
- **PALETTECHANGED** events from Windows. If this method returns a
- non-zero value, one or more display palette entries were changed, and
- the image should be redrawn.
-
- :param handle: Device context (HDC), cast to a Python integer, or an
- HDC or HWND instance.
- :return: A true value if one or more entries were changed (this
- indicates that the image should be redrawn).
- """
- if isinstance(handle, HWND):
- handle = self.image.getdc(handle)
- try:
- result = self.image.query_palette(handle)
- finally:
- self.image.releasedc(handle, handle)
- else:
- result = self.image.query_palette(handle)
- return result
-
- def paste(self, im, box=None):
- """
- Paste a PIL image into the bitmap image.
-
- :param im: A PIL image. The size must match the target region.
- If the mode does not match, the image is converted to the
- mode of the bitmap image.
- :param box: A 4-tuple defining the left, upper, right, and
- lower pixel coordinate. See :ref:`coordinate-system`. If
- None is given instead of a tuple, all of the image is
- assumed.
- """
- im.load()
- if self.mode != im.mode:
- im = im.convert(self.mode)
- if box:
- self.image.paste(im.im, box)
- else:
- self.image.paste(im.im)
-
- def frombytes(self, buffer):
- """
- Load display memory contents from byte data.
-
- :param buffer: A buffer containing display data (usually
- data returned from :py:func:`~PIL.ImageWin.Dib.tobytes`)
- """
- return self.image.frombytes(buffer)
-
- def tobytes(self):
- """
- Copy display memory contents to bytes object.
-
- :return: A bytes object containing display data.
- """
- return self.image.tobytes()
-
-
-class Window:
- """Create a Window with the given title size."""
-
- def __init__(self, title="PIL", width=None, height=None):
- self.hwnd = Image.core.createwindow(
- title, self.__dispatcher, width or 0, height or 0
- )
-
- def __dispatcher(self, action, *args):
- return getattr(self, "ui_handle_" + action)(*args)
-
- def ui_handle_clear(self, dc, x0, y0, x1, y1):
- pass
-
- def ui_handle_damage(self, x0, y0, x1, y1):
- pass
-
- def ui_handle_destroy(self):
- pass
-
- def ui_handle_repair(self, dc, x0, y0, x1, y1):
- pass
-
- def ui_handle_resize(self, width, height):
- pass
-
- def mainloop(self):
- Image.core.eventloop()
-
-
-class ImageWindow(Window):
- """Create an image window which displays the given image."""
-
- def __init__(self, image, title="PIL"):
- if not isinstance(image, Dib):
- image = Dib(image)
- self.image = image
- width, height = image.size
- super().__init__(title, width=width, height=height)
-
- def ui_handle_repair(self, dc, x0, y0, x1, y1):
- self.image.draw(dc, (x0, y0, x1, y1))
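-
-
-# Illustrative usage sketch (added for clarity; not part of the original module).
-# It assumes a Windows build of Pillow and an image at the placeholder path.
-if __name__ == "__main__":
-    im = Image.open("example.png")  # placeholder path, replace as needed
-    window = ImageWindow(im)  # wraps the image in a Dib and creates a window
-    window.mainloop()  # run the Windows event loop until the window is closed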
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/_yaml/__init__.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/_yaml/__init__.py
deleted file mode 100644
index 7baa8c4b68127d5cdf0be9a799429e61347c2694..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/_yaml/__init__.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# This is a stub package designed to roughly emulate the _yaml
-# extension module, which previously existed as a standalone module
-# and has been moved into the `yaml` package namespace.
-# It does not perfectly mimic its old counterpart, but should get
-# close enough for anyone who's relying on it even when they shouldn't.
-import yaml
-
-# in some circumstances, the yaml module we imported may be from a different version, so we need
-# to tread carefully when poking at it here (it may not have the attributes we expect)
-if not getattr(yaml, '__with_libyaml__', False):
- from sys import version_info
-
- exc = ModuleNotFoundError if version_info >= (3, 6) else ImportError
- raise exc("No module named '_yaml'")
-else:
- from yaml._yaml import *
- import warnings
- warnings.warn(
- 'The _yaml extension module is now located at yaml._yaml'
- ' and its location is subject to change. To use the'
- ' LibYAML-based parser and emitter, import from `yaml`:'
- ' `from yaml import CLoader as Loader, CDumper as Dumper`.',
- DeprecationWarning
- )
- del warnings
- # Don't `del yaml` here because yaml is actually an existing
- # namespace member of _yaml.
-
-__name__ = '_yaml'
-# If the module is top-level (i.e. not a part of any specific package)
-# then the attribute should be set to ''.
-# https://docs.python.org/3.8/library/types.html
-__package__ = ''
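-
-# Illustrative sketch (added for clarity; not part of the original stub): the
-# import recommended by the deprecation warning above, with a pure-Python
-# fallback for builds without libyaml support:
-#
-#     try:
-#         from yaml import CLoader as Loader, CDumper as Dumper
-#     except ImportError:
-#         from yaml import SafeLoader as Loader, SafeDumper as Dumper
-#
-#     data = yaml.load("a: 1", Loader=Loader)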
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/losses/mask_or_segm.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/losses/mask_or_segm.py
deleted file mode 100644
index 98b773d99fd29a48cbdfa94c5882c9c3d94003ee..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/losses/mask_or_segm.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-from typing import Any, List
-import torch
-
-from detectron2.config import CfgNode
-from detectron2.structures import Instances
-
-from .mask import MaskLoss
-from .segm import SegmentationLoss
-
-
-class MaskOrSegmentationLoss:
- """
- Mask or segmentation loss as cross-entropy for raw unnormalized scores
- given ground truth labels. Ground truth labels are either defined by coarse
- segmentation annotation, or by mask annotation, depending on the config
- value MODEL.ROI_DENSEPOSE_HEAD.COARSE_SEGM_TRAINED_BY_MASKS
- """
-
- def __init__(self, cfg: CfgNode):
- """
- Initialize segmentation loss from configuration options
-
- Args:
- cfg (CfgNode): configuration options
- """
- self.segm_trained_by_masks = cfg.MODEL.ROI_DENSEPOSE_HEAD.COARSE_SEGM_TRAINED_BY_MASKS
- if self.segm_trained_by_masks:
- self.mask_loss = MaskLoss()
- self.segm_loss = SegmentationLoss(cfg)
-
- def __call__(
- self,
- proposals_with_gt: List[Instances],
- densepose_predictor_outputs: Any,
- packed_annotations: Any,
- ) -> torch.Tensor:
- """
- Compute segmentation loss as cross-entropy between aligned unnormalized
- score estimates and ground truth; with ground truth given
- either by masks, or by coarse segmentation annotations.
-
- Args:
- proposals_with_gt (list of Instances): detections with associated ground truth data
- densepose_predictor_outputs: an object of a dataclass that contains predictor outputs
- with estimated values; assumed to have the following attributes:
- * coarse_segm - coarse segmentation estimates, tensor of shape [N, D, S, S]
- packed_annotations: packed annotations for efficient loss computation
- Return:
- tensor: loss value as cross-entropy for raw unnormalized scores
- given ground truth labels
- """
- if self.segm_trained_by_masks:
- return self.mask_loss(proposals_with_gt, densepose_predictor_outputs)
- return self.segm_loss(proposals_with_gt, densepose_predictor_outputs, packed_annotations)
-
- def fake_value(self, densepose_predictor_outputs: Any) -> torch.Tensor:
- """
- Fake segmentation loss used when no suitable ground truth data
- was found in a batch. The loss has a value 0 and is primarily used to
- construct the computation graph, so that `DistributedDataParallel`
- has similar graphs on all GPUs and can perform reduction properly.
-
- Args:
- densepose_predictor_outputs: DensePose predictor outputs, an object
- of a dataclass that is assumed to have `coarse_segm`
- attribute
- Return:
- Zero value loss with proper computation graph
- """
- return densepose_predictor_outputs.coarse_segm.sum() * 0
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/chart_confidence.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/chart_confidence.py
deleted file mode 100644
index 57c63257a7c176af1522e2f143ed594c26906c76..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/chart_confidence.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from dataclasses import make_dataclass
-from functools import lru_cache
-from typing import Any, Optional
-import torch
-
-
-@lru_cache(maxsize=None)
-def decorate_predictor_output_class_with_confidences(BasePredictorOutput: type) -> type:
- """
- Create a new output class from an existing one by adding new attributes
- related to confidence estimation:
- - sigma_1 (tensor)
- - sigma_2 (tensor)
- - kappa_u (tensor)
- - kappa_v (tensor)
- - fine_segm_confidence (tensor)
- - coarse_segm_confidence (tensor)
-
- Details on confidence estimation parameters can be found in:
- N. Neverova, D. Novotny, A. Vedaldi "Correlated Uncertainty for Learning
- Dense Correspondences from Noisy Labels", p. 918--926, in Proc. NIPS 2019
- A. Sanakoyeu et al., Transferring Dense Pose to Proximal Animal Classes, CVPR 2020
-
-    The new class inherits from the provided `BasePredictorOutput` class;
-    its name is composed of the name of the provided class and the
-    "WithConfidences" suffix.
-
- Args:
- BasePredictorOutput (type): output type to which confidence data
- is to be added, assumed to be a dataclass
- Return:
- New dataclass derived from the provided one that has attributes
- for confidence estimation
- """
-
- PredictorOutput = make_dataclass(
- BasePredictorOutput.__name__ + "WithConfidences",
- fields=[
- ("sigma_1", Optional[torch.Tensor], None),
- ("sigma_2", Optional[torch.Tensor], None),
- ("kappa_u", Optional[torch.Tensor], None),
- ("kappa_v", Optional[torch.Tensor], None),
- ("fine_segm_confidence", Optional[torch.Tensor], None),
- ("coarse_segm_confidence", Optional[torch.Tensor], None),
- ],
- bases=(BasePredictorOutput,),
- )
-
- # add possibility to index PredictorOutput
-
- def slice_if_not_none(data, item):
- if data is None:
- return None
- if isinstance(item, int):
- return data[item].unsqueeze(0)
- return data[item]
-
- def PredictorOutput_getitem(self, item):
- PredictorOutput = type(self)
- base_predictor_output_sliced = super(PredictorOutput, self).__getitem__(item)
- return PredictorOutput(
- **base_predictor_output_sliced.__dict__,
- coarse_segm_confidence=slice_if_not_none(self.coarse_segm_confidence, item),
- fine_segm_confidence=slice_if_not_none(self.fine_segm_confidence, item),
- sigma_1=slice_if_not_none(self.sigma_1, item),
- sigma_2=slice_if_not_none(self.sigma_2, item),
- kappa_u=slice_if_not_none(self.kappa_u, item),
- kappa_v=slice_if_not_none(self.kappa_v, item),
- )
-
- PredictorOutput.__getitem__ = PredictorOutput_getitem
-
- def PredictorOutput_to(self, device: torch.device):
- """
- Transfers all tensors to the given device
- """
- PredictorOutput = type(self)
- base_predictor_output_to = super(PredictorOutput, self).to(device) # pyre-ignore[16]
-
- def to_device_if_tensor(var: Any):
- if isinstance(var, torch.Tensor):
- return var.to(device)
- return var
-
- return PredictorOutput(
- **base_predictor_output_to.__dict__,
- sigma_1=to_device_if_tensor(self.sigma_1),
- sigma_2=to_device_if_tensor(self.sigma_2),
- kappa_u=to_device_if_tensor(self.kappa_u),
- kappa_v=to_device_if_tensor(self.kappa_v),
- fine_segm_confidence=to_device_if_tensor(self.fine_segm_confidence),
- coarse_segm_confidence=to_device_if_tensor(self.coarse_segm_confidence),
- )
-
- PredictorOutput.to = PredictorOutput_to
- return PredictorOutput
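-
-
-# Illustrative usage sketch (added for clarity; not part of the original module).
-# `_DummyBase` is a stand-in for a real DensePose predictor-output dataclass.
-if __name__ == "__main__":
-    from dataclasses import dataclass
-
-    @dataclass
-    class _DummyBase:
-        coarse_segm: torch.Tensor
-
-        def __getitem__(self, item):
-            segm = self.coarse_segm[item]
-            if isinstance(item, int):
-                segm = segm.unsqueeze(0)
-            return _DummyBase(coarse_segm=segm)
-
-        def to(self, device: torch.device):
-            return _DummyBase(coarse_segm=self.coarse_segm.to(device))
-
-    Decorated = decorate_predictor_output_class_with_confidences(_DummyBase)
-    out = Decorated(coarse_segm=torch.zeros(2, 2, 4, 4), sigma_1=torch.ones(2, 25, 4, 4))
-    print(type(out).__name__, out[0].sigma_1.shape)  # _DummyBaseWithConfidences torch.Size([1, 25, 4, 4])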
diff --git a/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/run_without_vae.py b/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/run_without_vae.py
deleted file mode 100644
index ebe25c76d4cfd8adf93c3886390e0f22853eddc1..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/cocktails/representation_learning/run_without_vae.py
+++ /dev/null
@@ -1,514 +0,0 @@
-import torch; torch.manual_seed(0)
-import torch.utils
-from torch.utils.data import DataLoader
-import torch.distributions
-import torch.nn as nn
-import matplotlib.pyplot as plt; plt.rcParams['figure.dpi'] = 200
-from src.cocktails.representation_learning.dataset import MyDataset, get_representation_from_ingredient, get_max_n_ingredients
-import json
-import pandas as pd
-import numpy as np
-import os
-from src.cocktails.representation_learning.multihead_model import get_multihead_model
-from src.cocktails.config import COCKTAILS_CSV_DATA, FULL_COCKTAIL_REP_PATH, EXPERIMENT_PATH
-from src.cocktails.utilities.cocktail_utilities import get_bunch_of_rep_keys
-from src.cocktails.utilities.ingredients_utilities import ingredient_profiles
-from resource import getrusage
-from resource import RUSAGE_SELF
-import gc
-gc.collect(2)
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
-def get_params():
- data = pd.read_csv(COCKTAILS_CSV_DATA)
- max_ingredients, ingredient_set, liquor_set, liqueur_set = get_max_n_ingredients(data)
- num_ingredients = len(ingredient_set)
- rep_keys = get_bunch_of_rep_keys()['custom']
- ing_keys = [k.split(' ')[1] for k in rep_keys]
- ing_keys.remove('volume')
- nb_ing_categories = len(set(ingredient_profiles['type']))
- category_encodings = dict(zip(sorted(set(ingredient_profiles['type'])), np.eye(nb_ing_categories)))
-
- params = dict(trial_id='test',
- save_path=EXPERIMENT_PATH + "/multihead_model/",
- nb_epochs=500,
- print_every=50,
- plot_every=50,
- batch_size=128,
- lr=0.001,
- dropout=0.,
- nb_epoch_switch_beta=600,
- latent_dim=10,
- beta_vae=0.2,
- ing_keys=ing_keys,
- nb_ingredients=len(ingredient_set),
- hidden_dims_ingredients=[128],
- hidden_dims_cocktail=[64],
- hidden_dims_decoder=[32],
- agg='mean',
- activation='relu',
- auxiliaries_dict=dict(categories=dict(weight=5, type='classif', final_activ=None, dim_output=len(set(data['subcategory']))), #0.5
- glasses=dict(weight=0.5, type='classif', final_activ=None, dim_output=len(set(data['glass']))), #0.1
- prep_type=dict(weight=0.1, type='classif', final_activ=None, dim_output=len(set(data['category']))),#1
- cocktail_reps=dict(weight=1, type='regression', final_activ=None, dim_output=13),#1
- volume=dict(weight=1, type='regression', final_activ='relu', dim_output=1),#1
- taste_reps=dict(weight=1, type='regression', final_activ='relu', dim_output=2),#1
- ingredients_presence=dict(weight=0, type='multiclassif', final_activ=None, dim_output=num_ingredients),#10
- ingredients_quantities=dict(weight=0, type='regression', final_activ=None, dim_output=num_ingredients)),
- category_encodings=category_encodings
- )
- water_rep, indexes_to_normalize = get_representation_from_ingredient(ingredients=['water'], quantities=[1],
- max_q_per_ing=dict(zip(ingredient_set, [1] * num_ingredients)), index=0,
- params=params)
- dim_rep_ingredient = water_rep.size
- params['indexes_ing_to_normalize'] = indexes_to_normalize
- params['deepset_latent_dim'] = dim_rep_ingredient * max_ingredients
- params['dim_rep_ingredient'] = dim_rep_ingredient
- params['input_dim'] = params['nb_ingredients']
- params = compute_expe_name_and_save_path(params)
- del params['category_encodings'] # to dump
- with open(params['save_path'] + 'params.json', 'w') as f:
- json.dump(params, f)
-
- params = complete_params(params)
- return params
-
-def complete_params(params):
- data = pd.read_csv(COCKTAILS_CSV_DATA)
- cocktail_reps = np.loadtxt(FULL_COCKTAIL_REP_PATH)
- nb_ing_categories = len(set(ingredient_profiles['type']))
- category_encodings = dict(zip(sorted(set(ingredient_profiles['type'])), np.eye(nb_ing_categories)))
- params['cocktail_reps'] = cocktail_reps
- params['raw_data'] = data
- params['category_encodings'] = category_encodings
- return params
-
-def compute_losses_and_accuracies(loss_functions, auxiliaries, auxiliaries_str, outputs, data):
- losses = dict()
- accuracies = dict()
- other_metrics = dict()
- for i_k, k in enumerate(auxiliaries_str):
- # get ground truth
- # compute loss
- if k == 'volume':
- outputs[i_k] = outputs[i_k].flatten()
- ground_truth = auxiliaries[k]
- if ground_truth.dtype == torch.float64:
- losses[k] = loss_functions[k](outputs[i_k], ground_truth.float()).float()
- elif ground_truth.dtype == torch.int64:
- if str(loss_functions[k]) != "BCEWithLogitsLoss()":
- losses[k] = loss_functions[k](outputs[i_k].float(), ground_truth.long()).float()
- else:
- losses[k] = loss_functions[k](outputs[i_k].float(), ground_truth.float()).float()
- else:
- losses[k] = loss_functions[k](outputs[i_k], ground_truth).float()
- # compute accuracies
- if str(loss_functions[k]) == 'CrossEntropyLoss()':
- bs, n_options = outputs[i_k].shape
- predicted = outputs[i_k].argmax(dim=1).detach().numpy()
- true = ground_truth.int().detach().numpy()
- confusion_matrix = np.zeros([n_options, n_options])
- for i in range(bs):
- confusion_matrix[true[i], predicted[i]] += 1
- acc = confusion_matrix.diagonal().sum() / bs
- for i in range(n_options):
- if confusion_matrix[i].sum() != 0:
- confusion_matrix[i] /= confusion_matrix[i].sum()
- other_metrics[k + '_confusion'] = confusion_matrix
- accuracies[k] = np.mean(outputs[i_k].argmax(dim=1).detach().numpy() == ground_truth.int().detach().numpy())
-            assert abs(acc - accuracies[k]) < 1e-5
-
- elif str(loss_functions[k]) == 'BCEWithLogitsLoss()':
- assert k == 'ingredients_presence'
- outputs_rescaled = outputs[i_k].detach().numpy() * data.dataset.std_ing_quantities + data.dataset.mean_ing_quantities
- predicted_presence = (outputs_rescaled > 0).astype(bool)
- presence = ground_truth.detach().numpy().astype(bool)
- other_metrics[k + '_false_positive'] = np.mean(np.logical_and(predicted_presence.astype(bool), ~presence.astype(bool)))
- other_metrics[k + '_false_negative'] = np.mean(np.logical_and(~predicted_presence.astype(bool), presence.astype(bool)))
- accuracies[k] = np.mean(predicted_presence == presence) # accuracy for multi class labeling
- elif str(loss_functions[k]) == 'MSELoss()':
- accuracies[k] = np.nan
- else:
- raise ValueError
- return losses, accuracies, other_metrics
-
-def compute_metric_output(aux_other_metrics, data, ingredient_quantities, x_hat):
- ing_q = ingredient_quantities.detach().numpy()# * data.dataset.std_ing_quantities + data.dataset.mean_ing_quantities
- ing_presence = (ing_q > 0)
- x_hat = x_hat.detach().numpy()
- # x_hat = x_hat.detach().numpy() * data.dataset.std_ing_quantities + data.dataset.mean_ing_quantities
- abs_diff = np.abs(ing_q - x_hat) * data.dataset.max_ing_quantities
- # abs_diff = np.abs(ing_q - x_hat)
- ing_q_abs_loss_when_present, ing_q_abs_loss_when_absent = [], []
- for i in range(ingredient_quantities.shape[0]):
- ing_q_abs_loss_when_present.append(np.mean(abs_diff[i, np.where(ing_presence[i])]))
- ing_q_abs_loss_when_absent.append(np.mean(abs_diff[i, np.where(~ing_presence[i])]))
- aux_other_metrics['ing_q_abs_loss_when_present'] = np.mean(ing_q_abs_loss_when_present)
- aux_other_metrics['ing_q_abs_loss_when_absent'] = np.mean(ing_q_abs_loss_when_absent)
- return aux_other_metrics
-
-def run_epoch(opt, train, model, data, loss_functions, weights, params):
- if train:
- model.train()
- else:
- model.eval()
-
- # prepare logging of losses
- losses = dict(kld_loss=[],
- mse_loss=[],
- vae_loss=[],
- volume_loss=[],
- global_loss=[])
- accuracies = dict()
- other_metrics = dict()
- for aux in params['auxiliaries_dict'].keys():
- losses[aux] = []
- accuracies[aux] = []
- if train: opt.zero_grad()
-
- for d in data:
- nb_ingredients = d[0]
- batch_size = nb_ingredients.shape[0]
- x_ingredients = d[1].float()
- ingredient_quantities = d[2]
- cocktail_reps = d[3]
- auxiliaries = d[4]
- for k in auxiliaries.keys():
- if auxiliaries[k].dtype == torch.float64: auxiliaries[k] = auxiliaries[k].float()
- taste_valid = d[-1]
- z, outputs, auxiliaries_str = model.forward(ingredient_quantities.float())
- # get auxiliary losses and accuracies
- aux_losses, aux_accuracies, aux_other_metrics = compute_losses_and_accuracies(loss_functions, auxiliaries, auxiliaries_str, outputs, data)
-
- # compute vae loss
- aux_other_metrics = compute_metric_output(aux_other_metrics, data, ingredient_quantities, outputs[auxiliaries_str.index('ingredients_quantities')])
-
- indexes_taste_valid = np.argwhere(taste_valid.detach().numpy()).flatten()
- if indexes_taste_valid.size > 0:
- outputs_taste = model.get_auxiliary(z[indexes_taste_valid], aux_str='taste_reps')
- gt = auxiliaries['taste_reps'][indexes_taste_valid]
-            factor_loss = indexes_taste_valid.size / (0.3 * batch_size)  # loss scaling factor: equals 1 when the batch has the same ratio of taste-labeled samples as the dataset (~30%); decreases with fewer such samples, increases with more
- aux_losses['taste_reps'] = (loss_functions['taste_reps'](outputs_taste, gt) * factor_loss).float()
- else:
- aux_losses['taste_reps'] = torch.FloatTensor([0]).reshape([])
- aux_accuracies['taste_reps'] = 0
-
- # aggregate losses
- global_loss = torch.sum(torch.cat([torch.atleast_1d(aux_losses[k] * weights[k]) for k in params['auxiliaries_dict'].keys()]))
- # for k in params['auxiliaries_dict'].keys():
- # global_loss += aux_losses[k] * weights[k]
-
- if train:
- global_loss.backward()
- opt.step()
- opt.zero_grad()
-
- # logging
- losses['global_loss'].append(float(global_loss))
- for k in params['auxiliaries_dict'].keys():
- losses[k].append(float(aux_losses[k]))
- accuracies[k].append(float(aux_accuracies[k]))
- for k in aux_other_metrics.keys():
- if k not in other_metrics.keys():
- other_metrics[k] = [aux_other_metrics[k]]
- else:
- other_metrics[k].append(aux_other_metrics[k])
-
- for k in losses.keys():
- losses[k] = np.mean(losses[k])
- for k in accuracies.keys():
- accuracies[k] = np.mean(accuracies[k])
- for k in other_metrics.keys():
- other_metrics[k] = np.mean(other_metrics[k], axis=0)
- return model, losses, accuracies, other_metrics
-
-def prepare_data_and_loss(params):
- train_data = MyDataset(split='train', params=params)
- test_data = MyDataset(split='test', params=params)
-
- train_data_loader = DataLoader(train_data, batch_size=params['batch_size'], shuffle=True)
- test_data_loader = DataLoader(test_data, batch_size=params['batch_size'], shuffle=True)
-
- loss_functions = dict()
- weights = dict()
- for k in sorted(params['auxiliaries_dict'].keys()):
- if params['auxiliaries_dict'][k]['type'] == 'classif':
- if k == 'glasses':
- classif_weights = train_data.glasses_weights
- elif k == 'prep_type':
- classif_weights = train_data.prep_types_weights
- elif k == 'categories':
- classif_weights = train_data.categories_weights
- else:
- raise ValueError
- loss_functions[k] = nn.CrossEntropyLoss(torch.FloatTensor(classif_weights))
- elif params['auxiliaries_dict'][k]['type'] == 'multiclassif':
- loss_functions[k] = nn.BCEWithLogitsLoss()
- elif params['auxiliaries_dict'][k]['type'] == 'regression':
- loss_functions[k] = nn.MSELoss()
- else:
- raise ValueError
- weights[k] = params['auxiliaries_dict'][k]['weight']
-
-
- return loss_functions, train_data_loader, test_data_loader, weights
-
-def print_losses(train, losses, accuracies, other_metrics):
- keyword = 'Train' if train else 'Eval'
- print(f'\t{keyword} logs:')
- keys = ['global_loss', 'vae_loss', 'mse_loss', 'kld_loss', 'volume_loss']
- for k in keys:
- print(f'\t\t{k} - Loss: {losses[k]:.2f}')
- for k in sorted(accuracies.keys()):
- print(f'\t\t{k} (aux) - Loss: {losses[k]:.2f}, Acc: {accuracies[k]:.2f}')
- for k in sorted(other_metrics.keys()):
- if 'confusion' not in k:
- print(f'\t\t{k} - {other_metrics[k]:.2f}')
-
-
-def run_experiment(params, verbose=True):
- loss_functions, train_data_loader, test_data_loader, weights = prepare_data_and_loss(params)
-
- model_params = [params[k] for k in ["input_dim", "activation", "hidden_dims_cocktail", "latent_dim", "dropout", "auxiliaries_dict", "hidden_dims_decoder"]]
- model = get_multihead_model(*model_params)
- opt = torch.optim.AdamW(model.parameters(), lr=params['lr'])
-
-
- all_train_losses = []
- all_eval_losses = []
- all_train_accuracies = []
- all_eval_accuracies = []
- all_eval_other_metrics = []
- all_train_other_metrics = []
- best_loss = np.inf
- model, eval_losses, eval_accuracies, eval_other_metrics = run_epoch(opt=opt, train=False, model=model, data=test_data_loader, loss_functions=loss_functions,
- weights=weights, params=params)
- all_eval_losses.append(eval_losses)
- all_eval_accuracies.append(eval_accuracies)
- all_eval_other_metrics.append(eval_other_metrics)
- if verbose: print(f'\n--------\nEpoch #0')
- if verbose: print_losses(train=False, accuracies=eval_accuracies, losses=eval_losses, other_metrics=eval_other_metrics)
- for epoch in range(params['nb_epochs']):
- if verbose and (epoch + 1) % params['print_every'] == 0: print(f'\n--------\nEpoch #{epoch+1}')
- model, train_losses, train_accuracies, train_other_metrics = run_epoch(opt=opt, train=True, model=model, data=train_data_loader, loss_functions=loss_functions,
- weights=weights, params=params)
- if verbose and (epoch + 1) % params['print_every'] == 0: print_losses(train=True, accuracies=train_accuracies, losses=train_losses, other_metrics=train_other_metrics)
- model, eval_losses, eval_accuracies, eval_other_metrics = run_epoch(opt=opt, train=False, model=model, data=test_data_loader, loss_functions=loss_functions,
- weights=weights, params=params)
- if verbose and (epoch + 1) % params['print_every'] == 0: print_losses(train=False, accuracies=eval_accuracies, losses=eval_losses, other_metrics=eval_other_metrics)
- if eval_losses['global_loss'] < best_loss:
- best_loss = eval_losses['global_loss']
- if verbose: print(f'Saving new best model with loss {best_loss:.2f}')
- torch.save(model.state_dict(), params['save_path'] + f'checkpoint_best.save')
-
- # log
- all_train_losses.append(train_losses)
- all_train_accuracies.append(train_accuracies)
- all_eval_losses.append(eval_losses)
- all_eval_accuracies.append(eval_accuracies)
- all_eval_other_metrics.append(eval_other_metrics)
- all_train_other_metrics.append(train_other_metrics)
-
- # if epoch == params['nb_epoch_switch_beta']:
- # params['beta_vae'] = 2.5
- # params['auxiliaries_dict']['prep_type']['weight'] /= 10
- # params['auxiliaries_dict']['glasses']['weight'] /= 10
-
- if (epoch + 1) % params['plot_every'] == 0:
-
- plot_results(all_train_losses, all_train_accuracies, all_train_other_metrics,
- all_eval_losses, all_eval_accuracies, all_eval_other_metrics, params['plot_path'], weights)
-
- return model
-
-def plot_results(all_train_losses, all_train_accuracies, all_train_other_metrics,
- all_eval_losses, all_eval_accuracies, all_eval_other_metrics, plot_path, weights):
-
- steps = np.arange(len(all_eval_accuracies))
-
- loss_keys = sorted(all_train_losses[0].keys())
- acc_keys = sorted(all_train_accuracies[0].keys())
- metrics_keys = sorted(all_train_other_metrics[0].keys())
-
- plt.figure()
- plt.title('Train losses')
- for k in loss_keys:
- factor = 1 if k == 'mse_loss' else 1
- if k not in weights.keys():
- plt.plot(steps[1:], [train_loss[k] * factor for train_loss in all_train_losses], label=k)
- else:
- if weights[k] != 0:
- plt.plot(steps[1:], [train_loss[k] * factor for train_loss in all_train_losses], label=k)
-
- plt.legend()
- plt.ylim([0, 4])
- plt.savefig(plot_path + 'train_losses.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- plt.figure()
- plt.title('Train accuracies')
- for k in acc_keys:
- if weights[k] != 0:
- plt.plot(steps[1:], [train_acc[k] for train_acc in all_train_accuracies], label=k)
- plt.legend()
- plt.ylim([0, 1])
- plt.savefig(plot_path + 'train_acc.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- plt.figure()
- plt.title('Train other metrics')
- for k in metrics_keys:
- if 'confusion' not in k and 'presence' in k:
- plt.plot(steps[1:], [train_metric[k] for train_metric in all_train_other_metrics], label=k)
- plt.legend()
- plt.ylim([0, 1])
- plt.savefig(plot_path + 'train_ing_presence_errors.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- plt.figure()
- plt.title('Train other metrics')
- for k in metrics_keys:
- if 'confusion' not in k and 'presence' not in k:
- plt.plot(steps[1:], [train_metric[k] for train_metric in all_train_other_metrics], label=k)
- plt.legend()
- plt.ylim([0, 15])
- plt.savefig(plot_path + 'train_ing_q_error.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- plt.figure()
- plt.title('Eval losses')
- for k in loss_keys:
- factor = 1 if k == 'mse_loss' else 1
- if k not in weights.keys():
- plt.plot(steps, [eval_loss[k] * factor for eval_loss in all_eval_losses], label=k)
- else:
- if weights[k] != 0:
- plt.plot(steps, [eval_loss[k] * factor for eval_loss in all_eval_losses], label=k)
- plt.legend()
- plt.ylim([0, 4])
- plt.savefig(plot_path + 'eval_losses.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- plt.figure()
- plt.title('Eval accuracies')
- for k in acc_keys:
- if weights[k] != 0:
- plt.plot(steps, [eval_acc[k] for eval_acc in all_eval_accuracies], label=k)
- plt.legend()
- plt.ylim([0, 1])
- plt.savefig(plot_path + 'eval_acc.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- plt.figure()
- plt.title('Eval other metrics')
- for k in metrics_keys:
- if 'confusion' not in k and 'presence' in k:
- plt.plot(steps, [eval_metric[k] for eval_metric in all_eval_other_metrics], label=k)
- plt.legend()
- plt.ylim([0, 1])
- plt.savefig(plot_path + 'eval_ing_presence_errors.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- plt.figure()
- plt.title('Eval other metrics')
- for k in metrics_keys:
- if 'confusion' not in k and 'presence' not in k:
- plt.plot(steps, [eval_metric[k] for eval_metric in all_eval_other_metrics], label=k)
- plt.legend()
- plt.ylim([0, 15])
- plt.savefig(plot_path + 'eval_ing_q_error.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
-
- for k in metrics_keys:
- if 'confusion' in k:
- plt.figure()
- plt.title(k)
- plt.ylabel('True')
- plt.xlabel('Predicted')
- plt.imshow(all_eval_other_metrics[-1][k], vmin=0, vmax=1)
- plt.colorbar()
- plt.savefig(plot_path + f'eval_{k}.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- for k in metrics_keys:
- if 'confusion' in k:
- plt.figure()
- plt.title(k)
- plt.ylabel('True')
- plt.xlabel('Predicted')
- plt.imshow(all_train_other_metrics[-1][k], vmin=0, vmax=1)
- plt.colorbar()
- plt.savefig(plot_path + f'train_{k}.png', dpi=200)
- fig = plt.gcf()
- plt.close(fig)
-
- plt.close('all')
-
-
-def get_model(model_path):
-
- with open(model_path + 'params.json', 'r') as f:
- params = json.load(f)
- params['save_path'] = model_path
- model_chkpt = model_path + "checkpoint_best.save"
- model_params = [params[k] for k in ["input_dim", "activation", "hidden_dims_cocktail", "latent_dim", "dropout", "auxiliaries_dict", "hidden_dims_decoder"]]
- model = get_multihead_model(*model_params)
- model.load_state_dict(torch.load(model_chkpt))
- model.eval()
- max_ing_quantities = np.loadtxt(model_path + 'max_ing_quantities.txt')
- def predict(ing_qs, aux_str):
- ing_qs /= max_ing_quantities
- input_model = torch.FloatTensor(ing_qs).reshape(1, -1)
- _, outputs, auxiliaries_str = model.forward(input_model, )
- if isinstance(aux_str, str):
- return outputs[auxiliaries_str.index(aux_str)].detach().numpy()
- elif isinstance(aux_str, list):
- return [outputs[auxiliaries_str.index(aux)].detach().numpy() for aux in aux_str]
- else:
- raise ValueError
- return predict, params
-
-
-def compute_expe_name_and_save_path(params):
- weights_str = '['
- for aux in params['auxiliaries_dict'].keys():
- weights_str += f'{params["auxiliaries_dict"][aux]["weight"]}, '
- weights_str = weights_str[:-2] + ']'
- save_path = params['save_path'] + params["trial_id"]
- save_path += f'_lr{params["lr"]}'
- save_path += f'_betavae{params["beta_vae"]}'
- save_path += f'_bs{params["batch_size"]}'
- save_path += f'_latentdim{params["latent_dim"]}'
- save_path += f'_hding{params["hidden_dims_ingredients"]}'
- save_path += f'_hdcocktail{params["hidden_dims_cocktail"]}'
- save_path += f'_hddecoder{params["hidden_dims_decoder"]}'
- save_path += f'_agg{params["agg"]}'
- save_path += f'_activ{params["activation"]}'
- save_path += f'_w{weights_str}'
- counter = 0
- while os.path.exists(save_path + f"_{counter}"):
- counter += 1
- save_path = save_path + f"_{counter}" + '/'
- params["save_path"] = save_path
- os.makedirs(save_path)
- os.makedirs(save_path + 'plots/')
- params['plot_path'] = save_path + 'plots/'
- print(f'logging to {save_path}')
- return params
-
-
-
-if __name__ == '__main__':
- params = get_params()
- run_experiment(params)
-
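-
-# Illustrative inference sketch (added for clarity; not part of the original
-# script). `model_dir` and the zero ingredient-quantity vector are placeholders.
-def _example_inference(model_dir):
-    predict, loaded_params = get_model(model_dir)
-    ing_qs = np.zeros(loaded_params['nb_ingredients'])  # placeholder quantities
-    volume, glass_logits = predict(ing_qs, ['volume', 'glasses'])
-    return volume, glass_logits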
diff --git a/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/models/Dense.py b/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/models/Dense.py
deleted file mode 100644
index e25a7d5d7c7499d482019822b5732f8a08494eae..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/models/Dense.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import torch
-from torch import Tensor
-from torch import nn
-from torch import functional as F
-from typing import Union, Tuple, List, Iterable, Dict
-import os
-import json
-from ..util import fullname, import_from_string
-
-
-class Dense(nn.Module):
- """Feed-forward function with activiation function.
-
- This layer takes a fixed-sized sentence embedding and passes it through a feed-forward layer. Can be used to generate deep averaging networs (DAN).
-
- :param in_features: Size of the input dimension
- :param out_features: Output size
- :param bias: Add a bias vector
- :param activation_function: Pytorch activation function applied on output
- :param init_weight: Initial value for the matrix of the linear layer
- :param init_bias: Initial value for the bias of the linear layer
- """
- def __init__(self, in_features: int, out_features: int, bias: bool = True, activation_function=nn.Tanh(), init_weight: Tensor = None, init_bias: Tensor = None):
- super(Dense, self).__init__()
- self.in_features = in_features
- self.out_features = out_features
- self.bias = bias
- self.activation_function = activation_function
- self.linear = nn.Linear(in_features, out_features, bias=bias)
-
- if init_weight is not None:
- self.linear.weight = nn.Parameter(init_weight)
-
- if init_bias is not None:
- self.linear.bias = nn.Parameter(init_bias)
-
- def forward(self, features: Dict[str, Tensor]):
- features.update({'sentence_embedding': self.activation_function(self.linear(features['sentence_embedding']))})
- return features
-
- def get_sentence_embedding_dimension(self) -> int:
- return self.out_features
-
- def get_config_dict(self):
- return {'in_features': self.in_features, 'out_features': self.out_features, 'bias': self.bias, 'activation_function': fullname(self.activation_function)}
-
- def save(self, output_path):
- with open(os.path.join(output_path, 'config.json'), 'w') as fOut:
- json.dump(self.get_config_dict(), fOut)
-
- torch.save(self.state_dict(), os.path.join(output_path, 'pytorch_model.bin'))
-
- def __repr__(self):
- return "Dense({})".format(self.get_config_dict())
-
-    @staticmethod
- def load(input_path):
- with open(os.path.join(input_path, 'config.json')) as fIn:
- config = json.load(fIn)
-
- config['activation_function'] = import_from_string(config['activation_function'])()
- model = Dense(**config)
- model.load_state_dict(torch.load(os.path.join(input_path, 'pytorch_model.bin'), map_location=torch.device('cpu')))
- return model
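-
-
-# Illustrative usage sketch (added for clarity; not part of the original module).
-# The dimensions below are assumptions.
-if __name__ == "__main__":
-    layer = Dense(in_features=768, out_features=256)
-    features = {'sentence_embedding': torch.rand(4, 768)}  # batch of 4 embeddings
-    out = layer(features)
-    print(out['sentence_embedding'].shape)  # torch.Size([4, 256])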
diff --git a/spaces/chatarena/chatarena-demo/README.md b/spaces/chatarena/chatarena-demo/README.md
deleted file mode 100644
index cfd3c0e7ebb46c5fa98f0c1ff1cfad58f566fc0d..0000000000000000000000000000000000000000
--- a/spaces/chatarena/chatarena-demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Chatarena Demo
-emoji: 👀
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/run_language_modeling.py b/spaces/chendl/compositional_test/transformers/examples/legacy/run_language_modeling.py
deleted file mode 100644
index 59490f710e1338f94f11e43c3ab0dce37dee2e13..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/legacy/run_language_modeling.py
+++ /dev/null
@@ -1,375 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
-# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Fine-tuning the library models for language modeling on a text file (GPT, GPT-2, CTRL, BERT, RoBERTa, XLNet).
-GPT, GPT-2 and CTRL are fine-tuned using a causal language modeling (CLM) loss. BERT and RoBERTa are fine-tuned
-using a masked language modeling (MLM) loss. XLNet is fine-tuned using a permutation language modeling (PLM) loss.
-"""
-
-
-import logging
-import math
-import os
-from dataclasses import dataclass, field
-from glob import glob
-from typing import Optional
-
-from torch.utils.data import ConcatDataset
-
-import transformers
-from transformers import (
- CONFIG_MAPPING,
- MODEL_WITH_LM_HEAD_MAPPING,
- AutoConfig,
- AutoModelWithLMHead,
- AutoTokenizer,
- DataCollatorForLanguageModeling,
- DataCollatorForPermutationLanguageModeling,
- DataCollatorForWholeWordMask,
- HfArgumentParser,
- LineByLineTextDataset,
- LineByLineWithRefDataset,
- PreTrainedTokenizer,
- TextDataset,
- Trainer,
- TrainingArguments,
- set_seed,
-)
-from transformers.trainer_utils import is_main_process
-
-
-logger = logging.getLogger(__name__)
-
-
-MODEL_CONFIG_CLASSES = list(MODEL_WITH_LM_HEAD_MAPPING.keys())
-MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
-
-
-@dataclass
-class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
- """
-
- model_name_or_path: Optional[str] = field(
- default=None,
- metadata={
- "help": (
- "The model checkpoint for weights initialization. Leave None if you want to train a model from"
- " scratch."
- )
- },
- )
- model_type: Optional[str] = field(
- default=None,
- metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
- )
- config_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
- )
- tokenizer_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
- )
- cache_dir: Optional[str] = field(
- default=None,
- metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
- )
-
-
-@dataclass
-class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
- """
-
- train_data_file: Optional[str] = field(
- default=None, metadata={"help": "The input training data file (a text file)."}
- )
- train_data_files: Optional[str] = field(
- default=None,
- metadata={
- "help": (
- "The input training data files (multiple files in glob format). "
- "Very often splitting large files to smaller files can prevent tokenizer going out of memory"
- )
- },
- )
- eval_data_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
- )
- train_ref_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input train ref data file for whole word mask in Chinese."},
- )
- eval_ref_file: Optional[str] = field(
- default=None,
- metadata={"help": "An optional input eval ref data file for whole word mask in Chinese."},
- )
- line_by_line: bool = field(
- default=False,
- metadata={"help": "Whether distinct lines of text in the dataset are to be handled as distinct sequences."},
- )
-
- mlm: bool = field(
- default=False, metadata={"help": "Train with masked-language modeling loss instead of language modeling."}
- )
-    whole_word_mask: bool = field(default=False, metadata={"help": "Whether or not to use whole word masking."})
- mlm_probability: float = field(
- default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"}
- )
- plm_probability: float = field(
- default=1 / 6,
- metadata={
- "help": (
- "Ratio of length of a span of masked tokens to surrounding context length for permutation language"
- " modeling."
- )
- },
- )
- max_span_length: int = field(
- default=5, metadata={"help": "Maximum length of a span of masked tokens for permutation language modeling."}
- )
-
- block_size: int = field(
- default=-1,
- metadata={
- "help": (
- "Optional input sequence length after tokenization."
- "The training dataset will be truncated in block of this size for training."
- "Default to the model max input length for single sentence inputs (take into account special tokens)."
- )
- },
- )
- overwrite_cache: bool = field(
- default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
- )
-
-
-def get_dataset(
- args: DataTrainingArguments,
- tokenizer: PreTrainedTokenizer,
- evaluate: bool = False,
- cache_dir: Optional[str] = None,
-):
- def _dataset(file_path, ref_path=None):
- if args.line_by_line:
- if ref_path is not None:
- if not args.whole_word_mask or not args.mlm:
- raise ValueError("You need to set world whole masking and mlm to True for Chinese Whole Word Mask")
- return LineByLineWithRefDataset(
- tokenizer=tokenizer,
- file_path=file_path,
- block_size=args.block_size,
- ref_path=ref_path,
- )
-
- return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
- else:
- return TextDataset(
- tokenizer=tokenizer,
- file_path=file_path,
- block_size=args.block_size,
- overwrite_cache=args.overwrite_cache,
- cache_dir=cache_dir,
- )
-
- if evaluate:
- return _dataset(args.eval_data_file, args.eval_ref_file)
- elif args.train_data_files:
- return ConcatDataset([_dataset(f) for f in glob(args.train_data_files)])
- else:
- return _dataset(args.train_data_file, args.train_ref_file)
-
-
-def main():
- # See all possible arguments in src/transformers/training_args.py
- # or by passing the --help flag to this script.
- # We now keep distinct sets of args, for a cleaner separation of concerns.
-
- parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
- model_args, data_args, training_args = parser.parse_args_into_dataclasses()
-
- if data_args.eval_data_file is None and training_args.do_eval:
- raise ValueError(
- "Cannot do evaluation without an evaluation data file. Either supply a file to --eval_data_file "
- "or remove the --do_eval argument."
- )
- if (
- os.path.exists(training_args.output_dir)
- and os.listdir(training_args.output_dir)
- and training_args.do_train
- and not training_args.overwrite_output_dir
- ):
- raise ValueError(
- f"Output directory ({training_args.output_dir}) already exists and is not empty. Use"
- " --overwrite_output_dir to overcome."
- )
-
- # Setup logging
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
- )
- logger.warning(
- "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
- training_args.local_rank,
- training_args.device,
- training_args.n_gpu,
- bool(training_args.local_rank != -1),
- training_args.fp16,
- )
- # Set the verbosity to info of the Transformers logger (on main process only):
- if is_main_process(training_args.local_rank):
- transformers.utils.logging.set_verbosity_info()
- transformers.utils.logging.enable_default_handler()
- transformers.utils.logging.enable_explicit_format()
- logger.info("Training/evaluation parameters %s", training_args)
-
- # Set seed
- set_seed(training_args.seed)
-
- # Load pretrained model and tokenizer
- #
- # Distributed training:
- # The .from_pretrained methods guarantee that only one local process can concurrently
- # download model & vocab.
-
- if model_args.config_name:
- config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir)
- elif model_args.model_name_or_path:
- config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
- else:
- config = CONFIG_MAPPING[model_args.model_type]()
- logger.warning("You are instantiating a new config instance from scratch.")
-
- if model_args.tokenizer_name:
- tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
- elif model_args.model_name_or_path:
- tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
- else:
- raise ValueError(
- "You are instantiating a new tokenizer from scratch. This is not supported, but you can do it from another"
- " script, save it,and load it from here, using --tokenizer_name"
- )
-
- if model_args.model_name_or_path:
- model = AutoModelWithLMHead.from_pretrained(
- model_args.model_name_or_path,
- from_tf=bool(".ckpt" in model_args.model_name_or_path),
- config=config,
- cache_dir=model_args.cache_dir,
- )
- else:
- logger.info("Training new model from scratch")
- model = AutoModelWithLMHead.from_config(config)
-
- model.resize_token_embeddings(len(tokenizer))
-
- if config.model_type in ["bert", "roberta", "distilbert", "camembert"] and not data_args.mlm:
- raise ValueError(
- "BERT and RoBERTa-like models do not have LM heads but masked LM heads. They must be run using the"
- "--mlm flag (masked language modeling)."
- )
-
- if data_args.block_size <= 0:
- data_args.block_size = tokenizer.max_len
- # Our input block size will be the max possible for the model
- else:
- data_args.block_size = min(data_args.block_size, tokenizer.max_len)
-
- # Get datasets
-
- train_dataset = (
- get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
- )
- eval_dataset = (
- get_dataset(data_args, tokenizer=tokenizer, evaluate=True, cache_dir=model_args.cache_dir)
- if training_args.do_eval
- else None
- )
- if config.model_type == "xlnet":
- data_collator = DataCollatorForPermutationLanguageModeling(
- tokenizer=tokenizer,
- plm_probability=data_args.plm_probability,
- max_span_length=data_args.max_span_length,
- )
- else:
- if data_args.mlm and data_args.whole_word_mask:
- data_collator = DataCollatorForWholeWordMask(
- tokenizer=tokenizer, mlm_probability=data_args.mlm_probability
- )
- else:
- data_collator = DataCollatorForLanguageModeling(
- tokenizer=tokenizer, mlm=data_args.mlm, mlm_probability=data_args.mlm_probability
- )
-
- # Initialize our Trainer
- trainer = Trainer(
- model=model,
- args=training_args,
- data_collator=data_collator,
- train_dataset=train_dataset,
- eval_dataset=eval_dataset,
- prediction_loss_only=True,
- )
-
- # Training
- if training_args.do_train:
- model_path = (
- model_args.model_name_or_path
- if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path)
- else None
- )
- trainer.train(model_path=model_path)
- trainer.save_model()
- # For convenience, we also re-save the tokenizer to the same directory,
- # so that you can share your model easily on huggingface.co/models =)
- if trainer.is_world_master():
- tokenizer.save_pretrained(training_args.output_dir)
-
- # Evaluation
- results = {}
- if training_args.do_eval:
- logger.info("*** Evaluate ***")
-
- eval_output = trainer.evaluate()
-
- perplexity = math.exp(eval_output["eval_loss"])
- result = {"perplexity": perplexity}
-
- output_eval_file = os.path.join(training_args.output_dir, "eval_results_lm.txt")
- if trainer.is_world_master():
- with open(output_eval_file, "w") as writer:
- logger.info("***** Eval results *****")
- for key in sorted(result.keys()):
- logger.info(" %s = %s", key, str(result[key]))
- writer.write("%s = %s\n" % (key, str(result[key])))
-
- results.update(result)
-
- return results
-
-
-def _mp_fn(index):
- # For xla_spawn (TPUs)
- main()
-
-
-if __name__ == "__main__":
- main()
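-
-
-# Illustrative invocation sketch (added for clarity; not part of the original
-# script). The model name, data files, and output directory are placeholders:
-#
-#   python run_language_modeling.py \
-#       --model_name_or_path gpt2 \
-#       --train_data_file train.txt \
-#       --eval_data_file eval.txt \
-#       --do_train --do_eval \
-#       --output_dir ./lm_output --overwrite_output_dir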
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/model_parallel/README.md b/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/model_parallel/README.md
deleted file mode 100644
index b63b93862db06f23a65988907faaf3ffa2cc4d83..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/model_parallel/README.md
+++ /dev/null
@@ -1,67 +0,0 @@
-
-
-# Model parallel language model training example
-
-The following example showcases how to train/fine-tune GPTNeo model with model parallelism using
-the JAX/Flax backend and the [`pjit`](https://jax.readthedocs.io/en/latest/jax.experimental.pjit.html) transformation.
-
-> Note: The example is experimental and might have bugs. It also currently supports only a single TPU v3-8.
-
-The `partition.py` file defines the `PyTree` of `PartitionSpec` for the GPTNeo model, which describes how the model will be sharded.
-The actual sharding is handled automatically by `pjit`. The weights are sharded across all local devices.
-To adapt the script to other models, we also need to change the `PartitionSpec` accordingly.
-
-TODO: Add more explanation.
-
-Before training, let's prepare our model first. To be able to shard the model, the sharded dimension needs to be a multiple of the number of devices it'll be sharded on. But GPTNeo's vocab size is 50257, so we need to resize the embeddings accordingly.
-
-```python
-import jax.numpy as jnp
-
-from transformers import FlaxGPTNeoForCausalLM, GPTNeoConfig
-model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
-
-emb = jnp.zeros((50264, model.config.hidden_size))
-# update the first 50257 weights using pre-trained weights
-emb = emb.at[:50257, :].set(model.params["transformer"]["wte"]["embedding"])
-params = model.params
-params["transformer"]["wte"]["embedding"] = emb
-
-# initialize a random model with the right vocab_size
-config = GPTNeoConfig.from_pretrained("EleutherAI/gpt-neo-1.3B", vocab_size=50264)
-model = FlaxGPTNeoForCausalLM(config)
-
-# assign the pre-trained weights and save the model.
-model.params = params
-model.save_pretrained("gpt-neo-1.3B")
-```
-
-
-### Train Model
-
-```bash
-python run_clm_mp.py \
- --model_name_or_path gpt-neo-1.3B \
- --tokenizer_name gpt2 \
- --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
- --do_train --do_eval \
- --block_size 1024 \
- --num_train_epochs 5 \
- --learning_rate 4e-6 \
- --per_device_train_batch_size 3 --per_device_eval_batch_size 3 \
- --overwrite_output_dir --output_dir ~/tmp/flax-clm \
- --cache_dir ~/datasets_cache/wikitext --dtype bfloat16 \
- --logging_steps 96 --eval_steps 96
-```
\ No newline at end of file
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageStat.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageStat.py
deleted file mode 100644
index b7ebddf066ab6eb115a79d6bc34e31ab0c1569bd..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageStat.py
+++ /dev/null
@@ -1,148 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# global image statistics
-#
-# History:
-# 1996-04-05 fl Created
-# 1997-05-21 fl Added mask; added rms, var, stddev attributes
-# 1997-08-05 fl Added median
-# 1998-07-05 hk Fixed integer overflow error
-#
-# Notes:
-# This class shows how to implement delayed evaluation of attributes.
-# To get a certain value, simply access the corresponding attribute.
-# The __getattr__ dispatcher takes care of the rest.
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1996-97.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import functools
-import math
-import operator
-
-
-class Stat:
- def __init__(self, image_or_list, mask=None):
- try:
- if mask:
- self.h = image_or_list.histogram(mask)
- else:
- self.h = image_or_list.histogram()
- except AttributeError:
- self.h = image_or_list # assume it to be a histogram list
- if not isinstance(self.h, list):
- msg = "first argument must be image or list"
- raise TypeError(msg)
- self.bands = list(range(len(self.h) // 256))
-
- def __getattr__(self, id):
- """Calculate missing attribute"""
- if id[:4] == "_get":
- raise AttributeError(id)
- # calculate missing attribute
- v = getattr(self, "_get" + id)()
- setattr(self, id, v)
- return v
-
- def _getextrema(self):
- """Get min/max values for each band in the image"""
-
- def minmax(histogram):
- n = 255
- x = 0
- for i in range(256):
- if histogram[i]:
- n = min(n, i)
- x = max(x, i)
- return n, x # returns (255, 0) if there's no data in the histogram
-
- v = []
- for i in range(0, len(self.h), 256):
- v.append(minmax(self.h[i:]))
- return v
-
- def _getcount(self):
- """Get total number of pixels in each layer"""
-
- v = []
- for i in range(0, len(self.h), 256):
- v.append(functools.reduce(operator.add, self.h[i : i + 256]))
- return v
-
- def _getsum(self):
- """Get sum of all pixels in each layer"""
-
- v = []
- for i in range(0, len(self.h), 256):
- layer_sum = 0.0
- for j in range(256):
- layer_sum += j * self.h[i + j]
- v.append(layer_sum)
- return v
-
- def _getsum2(self):
- """Get squared sum of all pixels in each layer"""
-
- v = []
- for i in range(0, len(self.h), 256):
- sum2 = 0.0
- for j in range(256):
- sum2 += (j**2) * float(self.h[i + j])
- v.append(sum2)
- return v
-
- def _getmean(self):
- """Get average pixel level for each layer"""
-
- v = []
- for i in self.bands:
- v.append(self.sum[i] / self.count[i])
- return v
-
- def _getmedian(self):
- """Get median pixel level for each layer"""
-
- v = []
- for i in self.bands:
- s = 0
- half = self.count[i] // 2
- b = i * 256
- for j in range(256):
- s = s + self.h[b + j]
- if s > half:
- break
- v.append(j)
- return v
-
- def _getrms(self):
- """Get RMS for each layer"""
-
- v = []
- for i in self.bands:
- v.append(math.sqrt(self.sum2[i] / self.count[i]))
- return v
-
- def _getvar(self):
- """Get variance for each layer"""
-
- v = []
- for i in self.bands:
- n = self.count[i]
- v.append((self.sum2[i] - (self.sum[i] ** 2.0) / n) / n)
- return v
-
- def _getstddev(self):
- """Get standard deviation for each layer"""
-
- v = []
- for i in self.bands:
- v.append(math.sqrt(self.var[i]))
- return v
-
-
-Global = Stat # compatibility
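-
-
-# Illustrative usage sketch (added for clarity; not part of the original module).
-# "example.png" is a placeholder path; statistics are computed lazily on access.
-if __name__ == "__main__":
-    from PIL import Image
-
-    im = Image.open("example.png").convert("RGB")
-    stat = Stat(im)
-    print(stat.mean)  # per-band mean, computed by _getmean on first access
-    print(stat.stddev)  # per-band standard deviation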
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/__init__.py
deleted file mode 100644
index 90e2e6559da7b0e973285198d676f643e03baa69..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cffi/__init__.py
+++ /dev/null
@@ -1,14 +0,0 @@
-__all__ = ['FFI', 'VerificationError', 'VerificationMissing', 'CDefError',
- 'FFIError']
-
-from .api import FFI
-from .error import CDefError, FFIError, VerificationError, VerificationMissing
-from .error import PkgConfigError
-
-__version__ = "1.15.1"
-__version_info__ = (1, 15, 1)
-
-# The verifier module file names are based on the CRC32 of a string that
-# contains the following version number. It may be older than __version__
-# if nothing is clearly incompatible.
-__version_verifier_modules__ = "0.8.6"
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/property/test_persist.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/property/test_persist.py
deleted file mode 100644
index 7a1e8ca95501adf257fc0d401c1e50d999cf39fd..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/property/test_persist.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import logging
-import multiprocessing
-from multiprocessing.connection import Connection
-from typing import Generator, Callable
-from hypothesis import given
-import hypothesis.strategies as st
-import pytest
-import chromadb
-from chromadb.api import API
-from chromadb.config import Settings
-import chromadb.test.property.strategies as strategies
-import chromadb.test.property.invariants as invariants
-from chromadb.test.property.test_embeddings import (
- EmbeddingStateMachine,
- EmbeddingStateMachineStates,
-)
-from hypothesis.stateful import run_state_machine_as_test, rule, precondition
-import os
-import shutil
-import tempfile
-
-CreatePersistAPI = Callable[[], API]
-
-configurations = [
- Settings(
- chroma_api_impl="local",
- chroma_db_impl="duckdb+parquet",
- persist_directory=tempfile.gettempdir() + "/tests",
- )
-]
-
-
-@pytest.fixture(scope="module", params=configurations)
-def settings(request: pytest.FixtureRequest) -> Generator[Settings, None, None]:
- configuration = request.param
- yield configuration
- save_path = configuration.persist_directory
- # Remove if it exists
- if os.path.exists(save_path):
- shutil.rmtree(save_path)
-
-
-collection_st = st.shared(strategies.collections(with_hnsw_params=True), key="coll")
-
-
-@given(
- collection_strategy=collection_st,
- embeddings_strategy=strategies.recordsets(collection_st),
-)
-def test_persist(
- settings: Settings,
- collection_strategy: strategies.Collection,
- embeddings_strategy: strategies.RecordSet,
-) -> None:
- api_1 = chromadb.Client(settings)
- api_1.reset()
- coll = api_1.create_collection(
- name=collection_strategy.name,
- metadata=collection_strategy.metadata,
- embedding_function=collection_strategy.embedding_function,
- )
-
- if not invariants.is_metadata_valid(invariants.wrap_all(embeddings_strategy)):
- with pytest.raises(Exception):
- coll.add(**embeddings_strategy)
- return
-
- coll.add(**embeddings_strategy)
-
- invariants.count(coll, embeddings_strategy)
- invariants.metadatas_match(coll, embeddings_strategy)
- invariants.documents_match(coll, embeddings_strategy)
- invariants.ids_match(coll, embeddings_strategy)
- invariants.ann_accuracy(
- coll,
- embeddings_strategy,
- embedding_function=collection_strategy.embedding_function,
- )
-
- api_1.persist()
- del api_1
-
- api_2 = chromadb.Client(settings)
- coll = api_2.get_collection(
- name=collection_strategy.name,
- embedding_function=collection_strategy.embedding_function,
- )
- invariants.count(coll, embeddings_strategy)
- invariants.metadatas_match(coll, embeddings_strategy)
- invariants.documents_match(coll, embeddings_strategy)
- invariants.ids_match(coll, embeddings_strategy)
- invariants.ann_accuracy(
- coll,
- embeddings_strategy,
- embedding_function=collection_strategy.embedding_function,
- )
-
-
-def load_and_check(
- settings: Settings,
- collection_name: str,
- record_set: strategies.RecordSet,
- conn: Connection,
-) -> None:
- try:
- api = chromadb.Client(settings)
- coll = api.get_collection(
- name=collection_name,
- embedding_function=strategies.not_implemented_embedding_function(),
- )
- invariants.count(coll, record_set)
- invariants.metadatas_match(coll, record_set)
- invariants.documents_match(coll, record_set)
- invariants.ids_match(coll, record_set)
- invariants.ann_accuracy(coll, record_set)
- except Exception as e:
- conn.send(e)
- raise e
-
-
-class PersistEmbeddingsStateMachineStates(EmbeddingStateMachineStates):
- persist = "persist"
-
-
-class PersistEmbeddingsStateMachine(EmbeddingStateMachine):
- def __init__(self, api: API, settings: Settings):
- self.api = api
- self.settings = settings
- self.last_persist_delay = 10
- self.api.reset()
- super().__init__(self.api)
-
- @precondition(
- lambda self: len(self.record_set_state["ids"]) >= 1
- and self.last_persist_delay <= 0
- )
- @rule()
- def persist(self) -> None:
- self.on_state_change(PersistEmbeddingsStateMachineStates.persist)
- self.api.persist()
- collection_name = self.collection.name
- # Create a new process and then inside the process run the invariants
- # TODO: Once we switch off of duckdb and onto sqlite we can remove this
- ctx = multiprocessing.get_context("spawn")
- conn1, conn2 = multiprocessing.Pipe()
- p = ctx.Process(
- target=load_and_check,
- args=(self.settings, collection_name, self.record_set_state, conn2),
- )
- p.start()
- p.join()
-
- if conn1.poll():
- e = conn1.recv()
- raise e
-
- def on_state_change(self, new_state: str) -> None:
- if new_state == PersistEmbeddingsStateMachineStates.persist:
- self.last_persist_delay = 10
- else:
- self.last_persist_delay -= 1
-
-
-def test_persist_embeddings_state(
- caplog: pytest.LogCaptureFixture, settings: Settings
-) -> None:
- caplog.set_level(logging.ERROR)
- api = chromadb.Client(settings)
- run_state_machine_as_test(
- lambda: PersistEmbeddingsStateMachine(settings=settings, api=api)
- ) # type: ignore
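The deleted property test above checks that collections written with the old duckdb+parquet local API survive a client restart. Reduced to its essentials, and using only calls that appear in the test plus the standard Collection.add/count methods, the round trip looks roughly like this (the path and names are illustrative, and this targets the pre-0.4 chromadb API the test was written against):

    import chromadb
    from chromadb.config import Settings

    settings = Settings(
        chroma_api_impl="local",
        chroma_db_impl="duckdb+parquet",
        persist_directory="/tmp/chroma-demo",  # illustrative location
    )

    api = chromadb.Client(settings)
    coll = api.create_collection(name="demo")
    coll.add(ids=["a"], embeddings=[[0.0, 1.0]], documents=["hello"])
    api.persist()  # flush everything under persist_directory
    del api

    api2 = chromadb.Client(settings)
    coll2 = api2.get_collection(name="demo")
    assert coll2.count() == 1  # the record survived the restart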
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/__init__.py
deleted file mode 100644
index 86b9a25726d121ce9fb202f627dbab73c83e297f..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-from __future__ import annotations
-
-from cryptography.__about__ import __author__, __copyright__, __version__
-
-__all__ = [
- "__version__",
- "__author__",
- "__copyright__",
-]
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/__init__.py
deleted file mode 100644
index 34ee4ebb1b19d9a6e1745689085cd80c049f20b1..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# flake8: noqa
-from dataclasses_json.api import (DataClassJsonMixin,
- dataclass_json)
-from dataclasses_json.cfg import (config, global_config,
- Exclude, LetterCase)
-from dataclasses_json.undefined import CatchAll, Undefined
-
-__all__ = ['DataClassJsonMixin', 'LetterCase', 'dataclass_json',
- 'config', 'global_config', 'Exclude',
- 'CatchAll', 'Undefined']
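The deleted __init__ above only re-exports the public API; the central pieces are dataclass_json / DataClassJsonMixin, which add JSON (de)serialization to ordinary dataclasses. A small usage sketch:

    from dataclasses import dataclass
    from dataclasses_json import dataclass_json

    @dataclass_json
    @dataclass
    class Point:
        x: int
        y: int

    p = Point(1, 2)
    s = p.to_json()          # '{"x": 1, "y": 2}'
    q = Point.from_json(s)   # Point(x=1, y=2)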
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/client_server.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/client_server.py
deleted file mode 100644
index ee39798e5d408ae8e0212964b85915889d821d0b..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/client_server.py
+++ /dev/null
@@ -1,91 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from multiprocessing.pool import ThreadPool
-import faiss
-from typing import List, Tuple
-
-from . import rpc
-
-############################################################
-# Server implementation
-############################################################
-
-
-class SearchServer(rpc.Server):
- """ Assign version that can be exposed via RPC """
-
- def __init__(self, s: int, index: faiss.Index):
- rpc.Server.__init__(self, s)
- self.index = index
- self.index_ivf = faiss.extract_index_ivf(index)
-
-    def set_nprobe(self, nprobe: int) -> None:
- """ set nprobe field """
- self.index_ivf.nprobe = nprobe
-
- def get_ntotal(self) -> int:
- return self.index.ntotal
-
- def __getattr__(self, f):
- # all other functions get forwarded to the index
- return getattr(self.index, f)
-
-
-def run_index_server(index: faiss.Index, port: int, v6: bool = False):
- """ serve requests for that index forerver """
- rpc.run_server(
- lambda s: SearchServer(s, index),
- port, v6=v6)
-
-
-############################################################
-# Client implementation
-############################################################
-
-class ClientIndex:
- """manages a set of distance sub-indexes. The sub_indexes search a
- subset of the inverted lists. Searches are merged afterwards
- """
-
- def __init__(self, machine_ports: List[Tuple[str, int]], v6: bool = False):
- """ connect to a series of (host, port) pairs """
- self.sub_indexes = []
- for machine, port in machine_ports:
- self.sub_indexes.append(rpc.Client(machine, port, v6))
-
- self.ni = len(self.sub_indexes)
- # pool of threads. Each thread manages one sub-index.
- self.pool = ThreadPool(self.ni)
- # test connection...
- self.ntotal = self.get_ntotal()
- self.verbose = False
-
- def set_nprobe(self, nprobe: int) -> None:
- self.pool.map(
- lambda idx: idx.set_nprobe(nprobe),
- self.sub_indexes
- )
-
- def set_omp_num_threads(self, nt: int) -> None:
- self.pool.map(
- lambda idx: idx.set_omp_num_threads(nt),
- self.sub_indexes
- )
-
-    def get_ntotal(self) -> int:
- return sum(self.pool.map(
- lambda idx: idx.get_ntotal(),
- self.sub_indexes
- ))
-
- def search(self, x, k: int):
-
- rh = faiss.ResultHeap(x.shape[0], k)
-
- for Di, Ii in self.pool.imap(lambda idx: idx.search(x, k), self.sub_indexes):
- rh.add_result(Di, Ii)
- rh.finalize()
- return rh.D, rh.I
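Putting the two halves of the deleted module together: each server process wraps one index shard behind run_index_server, and a ClientIndex fans searches out to all shards and merges the results. A hedged usage sketch (hostnames, ports and the index file are illustrative, and the client lines assume the servers are already running):

    import numpy as np
    import faiss
    from faiss.contrib.client_server import ClientIndex, run_index_server

    # On each server machine (blocks, serving RPC requests for its shard):
    # shard = faiss.read_index("shard0.index")   # illustrative file name
    # run_index_server(shard, 12010)

    # On the client machine:
    client = ClientIndex([("server0", 12010), ("server1", 12010)])
    client.set_nprobe(16)                          # forwarded to every shard
    xq = np.random.rand(5, 64).astype("float32")   # query vectors; dim must match the index
    D, I = client.search(xq, k=10)                 # per-shard results merged via ResultHeap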
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/font.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/font.py
deleted file mode 100644
index 461f5c44c33151cc89befaf16e3c0e92bb4cde93..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/font.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from __future__ import absolute_import
-
-from .base import Type
-
-
-class Woff(Type):
- """
- Implements the WOFF font type matcher.
- """
- MIME = 'application/font-woff'
- EXTENSION = 'woff'
-
- def __init__(self):
- super(Woff, self).__init__(
- mime=Woff.MIME,
- extension=Woff.EXTENSION
- )
-
- def match(self, buf):
- return (len(buf) > 7 and
- buf[0] == 0x77 and
- buf[1] == 0x4F and
- buf[2] == 0x46 and
- buf[3] == 0x46 and
- ((buf[4] == 0x00 and
- buf[5] == 0x01 and
- buf[6] == 0x00 and
- buf[7] == 0x00) or
- (buf[4] == 0x4F and
- buf[5] == 0x54 and
- buf[6] == 0x54 and
- buf[7] == 0x4F) or
- (buf[4] == 0x74 and
- buf[5] == 0x72 and
- buf[6] == 0x75 and
- buf[7] == 0x65)))
-
-
-class Woff2(Type):
- """
- Implements the WOFF2 font type matcher.
- """
- MIME = 'application/font-woff'
- EXTENSION = 'woff2'
-
- def __init__(self):
- super(Woff2, self).__init__(
- mime=Woff2.MIME,
- extension=Woff2.EXTENSION
- )
-
- def match(self, buf):
- return (len(buf) > 7 and
- buf[0] == 0x77 and
- buf[1] == 0x4F and
- buf[2] == 0x46 and
- buf[3] == 0x32 and
- ((buf[4] == 0x00 and
- buf[5] == 0x01 and
- buf[6] == 0x00 and
- buf[7] == 0x00) or
- (buf[4] == 0x4F and
- buf[5] == 0x54 and
- buf[6] == 0x54 and
- buf[7] == 0x4F) or
- (buf[4] == 0x74 and
- buf[5] == 0x72 and
- buf[6] == 0x75 and
- buf[7] == 0x65)))
-
-
-class Ttf(Type):
- """
- Implements the TTF font type matcher.
- """
- MIME = 'application/font-sfnt'
- EXTENSION = 'ttf'
-
- def __init__(self):
- super(Ttf, self).__init__(
- mime=Ttf.MIME,
- extension=Ttf.EXTENSION
- )
-
- def match(self, buf):
- return (len(buf) > 4 and
- buf[0] == 0x00 and
- buf[1] == 0x01 and
- buf[2] == 0x00 and
- buf[3] == 0x00 and
- buf[4] == 0x00)
-
-
-class Otf(Type):
- """
- Implements the OTF font type matcher.
- """
- MIME = 'application/font-sfnt'
- EXTENSION = 'otf'
-
- def __init__(self):
- super(Otf, self).__init__(
- mime=Otf.MIME,
- extension=Otf.EXTENSION
- )
-
- def match(self, buf):
- return (len(buf) > 4 and
- buf[0] == 0x4F and
- buf[1] == 0x54 and
- buf[2] == 0x54 and
- buf[3] == 0x4F and
- buf[4] == 0x00)
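All four matchers in the deleted module above work the same way: compare the first few bytes of the buffer against a fixed signature ('wOFF', 'wOF2', the TrueType 0x00010000 tag, or 'OTTO'). A quick check of that logic, assuming the package layout shown in the path above:

    from filetype.types.font import Woff, Woff2

    # First 8 bytes of a WOFF-wrapped TrueType font: 'wOFF' followed by flavor 0x00010000.
    header = bytes([0x77, 0x4F, 0x46, 0x46, 0x00, 0x01, 0x00, 0x00])

    print(Woff().match(header))   # True  -> matches the WOFF signature
    print(Woff2().match(header))  # False -> WOFF2 expects 'wOF2' in bytes 0-3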
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/subset/__main__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/subset/__main__.py
deleted file mode 100644
index decf9ee6e50a612c65a87ebeaa8be115f1d25242..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/subset/__main__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import sys
-from fontTools.subset import main
-
-
-if __name__ == "__main__":
- sys.exit(main())
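The deleted __main__ shim above just makes the subsetter runnable as "python -m fontTools.subset"; the same entry point can also be driven from Python. The font path and subset options below are illustrative:

    from fontTools.subset import main

    # Equivalent to: python -m fontTools.subset MyFont.ttf --unicodes=U+0041-005A --output-file=Subset.ttf
    main(["MyFont.ttf", "--unicodes=U+0041-005A", "--output-file=Subset.ttf"])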
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_c_m_a_p.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_c_m_a_p.py
deleted file mode 100644
index 6c00aaf63dea48bd96e718809319f3e27c08567e..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_c_m_a_p.py
+++ /dev/null
@@ -1,1578 +0,0 @@
-from fontTools.misc.textTools import bytesjoin, safeEval, readHex
-from fontTools.misc.encodingTools import getEncoding
-from fontTools.ttLib import getSearchRange
-from fontTools.unicode import Unicode
-from . import DefaultTable
-import sys
-import struct
-import array
-import logging
-
-
-log = logging.getLogger(__name__)
-
-
-def _make_map(font, chars, gids):
- assert len(chars) == len(gids)
- glyphNames = font.getGlyphNameMany(gids)
- cmap = {}
- for char, gid, name in zip(chars, gids, glyphNames):
- if gid == 0:
- continue
- cmap[char] = name
- return cmap
-
-
-class table__c_m_a_p(DefaultTable.DefaultTable):
- """Character to Glyph Index Mapping Table
-
-    This class represents the ``cmap``
- table, which maps between input characters (in Unicode or other system encodings)
- and glyphs within the font. The ``cmap`` table contains one or more subtables
-    which determine the mapping of characters to glyphs across different platforms
- and encoding systems.
-
- ``table__c_m_a_p`` objects expose an accessor ``.tables`` which provides access
- to the subtables, although it is normally easier to retrieve individual subtables
- through the utility methods described below. To add new subtables to a font,
- first determine the subtable format (if in doubt use format 4 for glyphs within
- the BMP, format 12 for glyphs outside the BMP, and format 14 for Unicode Variation
- Sequences) construct subtable objects with ``CmapSubtable.newSubtable(format)``,
- and append them to the ``.tables`` list.
-
- Within a subtable, the mapping of characters to glyphs is provided by the ``.cmap``
- attribute.
-
- Example::
-
- cmap4_0_3 = CmapSubtable.newSubtable(4)
- cmap4_0_3.platformID = 0
- cmap4_0_3.platEncID = 3
- cmap4_0_3.language = 0
- cmap4_0_3.cmap = { 0xC1: "Aacute" }
-
- cmap = newTable("cmap")
- cmap.tableVersion = 0
- cmap.tables = [cmap4_0_3]
- """
-
- def getcmap(self, platformID, platEncID):
- """Returns the first subtable which matches the given platform and encoding.
-
- Args:
- platformID (int): The platform ID. Use 0 for Unicode, 1 for Macintosh
- (deprecated for new fonts), 2 for ISO (deprecated) and 3 for Windows.
- encodingID (int): Encoding ID. Interpretation depends on the platform ID.
- See the OpenType specification for details.
-
- Returns:
- An object which is a subclass of :py:class:`CmapSubtable` if a matching
- subtable is found within the font, or ``None`` otherwise.
- """
-
- for subtable in self.tables:
- if subtable.platformID == platformID and subtable.platEncID == platEncID:
- return subtable
- return None # not found
-
- def getBestCmap(
- self,
- cmapPreferences=(
- (3, 10),
- (0, 6),
- (0, 4),
- (3, 1),
- (0, 3),
- (0, 2),
- (0, 1),
- (0, 0),
- ),
- ):
- """Returns the 'best' Unicode cmap dictionary available in the font
- or ``None``, if no Unicode cmap subtable is available.
-
- By default it will search for the following (platformID, platEncID)
- pairs in order::
-
- (3, 10), # Windows Unicode full repertoire
- (0, 6), # Unicode full repertoire (format 13 subtable)
- (0, 4), # Unicode 2.0 full repertoire
- (3, 1), # Windows Unicode BMP
- (0, 3), # Unicode 2.0 BMP
- (0, 2), # Unicode ISO/IEC 10646
- (0, 1), # Unicode 1.1
- (0, 0) # Unicode 1.0
-
- This particular order matches what HarfBuzz uses to choose what
- subtable to use by default. This order prefers the largest-repertoire
- subtable, and among those, prefers the Windows-platform over the
- Unicode-platform as the former has wider support.
-
- This order can be customized via the ``cmapPreferences`` argument.
- """
- for platformID, platEncID in cmapPreferences:
- cmapSubtable = self.getcmap(platformID, platEncID)
- if cmapSubtable is not None:
- return cmapSubtable.cmap
- return None # None of the requested cmap subtables were found
-
- def buildReversed(self):
- """Builds a reverse mapping dictionary
-
- Iterates over all Unicode cmap tables and returns a dictionary mapping
- glyphs to sets of codepoints, such as::
-
- {
- 'one': {0x31}
- 'A': {0x41,0x391}
- }
-
- The values are sets of Unicode codepoints because
- some fonts map different codepoints to the same glyph.
- For example, ``U+0041 LATIN CAPITAL LETTER A`` and ``U+0391
- GREEK CAPITAL LETTER ALPHA`` are sometimes the same glyph.
- """
- result = {}
- for subtable in self.tables:
- if subtable.isUnicode():
- for codepoint, name in subtable.cmap.items():
- result.setdefault(name, set()).add(codepoint)
- return result
-
- def decompile(self, data, ttFont):
- tableVersion, numSubTables = struct.unpack(">HH", data[:4])
- self.tableVersion = int(tableVersion)
- self.tables = tables = []
- seenOffsets = {}
- for i in range(numSubTables):
- platformID, platEncID, offset = struct.unpack(
- ">HHl", data[4 + i * 8 : 4 + (i + 1) * 8]
- )
- platformID, platEncID = int(platformID), int(platEncID)
- format, length = struct.unpack(">HH", data[offset : offset + 4])
- if format in [8, 10, 12, 13]:
- format, reserved, length = struct.unpack(
- ">HHL", data[offset : offset + 8]
- )
- elif format in [14]:
- format, length = struct.unpack(">HL", data[offset : offset + 6])
-
- if not length:
- log.error(
- "cmap subtable is reported as having zero length: platformID %s, "
- "platEncID %s, format %s offset %s. Skipping table.",
- platformID,
- platEncID,
- format,
- offset,
- )
- continue
- table = CmapSubtable.newSubtable(format)
- table.platformID = platformID
- table.platEncID = platEncID
- # Note that by default we decompile only the subtable header info;
- # any other data gets decompiled only when an attribute of the
- # subtable is referenced.
- table.decompileHeader(data[offset : offset + int(length)], ttFont)
- if offset in seenOffsets:
- table.data = None # Mark as decompiled
- table.cmap = tables[seenOffsets[offset]].cmap
- else:
- seenOffsets[offset] = i
- tables.append(table)
- if ttFont.lazy is False: # Be lazy for None and True
- self.ensureDecompiled()
-
- def ensureDecompiled(self, recurse=False):
- # The recurse argument is unused, but part of the signature of
- # ensureDecompiled across the library.
- for st in self.tables:
- st.ensureDecompiled()
-
- def compile(self, ttFont):
- self.tables.sort() # sort according to the spec; see CmapSubtable.__lt__()
- numSubTables = len(self.tables)
- totalOffset = 4 + 8 * numSubTables
- data = struct.pack(">HH", self.tableVersion, numSubTables)
- tableData = b""
- seen = (
- {}
- ) # Some tables are the same object reference. Don't compile them twice.
- done = (
- {}
- ) # Some tables are different objects, but compile to the same data chunk
- for table in self.tables:
- offset = seen.get(id(table.cmap))
- if offset is None:
- chunk = table.compile(ttFont)
- offset = done.get(chunk)
- if offset is None:
- offset = seen[id(table.cmap)] = done[chunk] = totalOffset + len(
- tableData
- )
- tableData = tableData + chunk
- data = data + struct.pack(">HHl", table.platformID, table.platEncID, offset)
- return data + tableData
-
- def toXML(self, writer, ttFont):
- writer.simpletag("tableVersion", version=self.tableVersion)
- writer.newline()
- for table in self.tables:
- table.toXML(writer, ttFont)
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "tableVersion":
- self.tableVersion = safeEval(attrs["version"])
- return
- if name[:12] != "cmap_format_":
- return
- if not hasattr(self, "tables"):
- self.tables = []
- format = safeEval(name[12:])
- table = CmapSubtable.newSubtable(format)
- table.platformID = safeEval(attrs["platformID"])
- table.platEncID = safeEval(attrs["platEncID"])
- table.fromXML(name, attrs, content, ttFont)
- self.tables.append(table)
-
-
-class CmapSubtable(object):
- """Base class for all cmap subtable formats.
-
- Subclasses which handle the individual subtable formats are named
- ``cmap_format_0``, ``cmap_format_2`` etc. Use :py:meth:`getSubtableClass`
- to retrieve the concrete subclass, or :py:meth:`newSubtable` to get a
- new subtable object for a given format.
-
- The object exposes a ``.cmap`` attribute, which contains a dictionary mapping
- character codepoints to glyph names.
- """
-
- @staticmethod
- def getSubtableClass(format):
- """Return the subtable class for a format."""
- return cmap_classes.get(format, cmap_format_unknown)
-
- @staticmethod
- def newSubtable(format):
- """Return a new instance of a subtable for the given format
- ."""
- subtableClass = CmapSubtable.getSubtableClass(format)
- return subtableClass(format)
-
- def __init__(self, format):
- self.format = format
- self.data = None
- self.ttFont = None
- self.platformID = None #: The platform ID of this subtable
- self.platEncID = None #: The encoding ID of this subtable (interpretation depends on ``platformID``)
- self.language = (
- None #: The language ID of this subtable (Macintosh platform only)
- )
-
- def ensureDecompiled(self, recurse=False):
- # The recurse argument is unused, but part of the signature of
- # ensureDecompiled across the library.
- if self.data is None:
- return
- self.decompile(None, None) # use saved data.
- self.data = None # Once this table has been decompiled, make sure we don't
- # just return the original data. Also avoids recursion when
- # called with an attribute that the cmap subtable doesn't have.
-
- def __getattr__(self, attr):
- # allow lazy decompilation of subtables.
- if attr[:2] == "__": # don't handle requests for member functions like '__lt__'
- raise AttributeError(attr)
- if self.data is None:
- raise AttributeError(attr)
- self.ensureDecompiled()
- return getattr(self, attr)
-
- def decompileHeader(self, data, ttFont):
- format, length, language = struct.unpack(">HHH", data[:6])
- assert (
- len(data) == length
- ), "corrupt cmap table format %d (data length: %d, header length: %d)" % (
- format,
- len(data),
- length,
- )
- self.format = int(format)
- self.length = int(length)
- self.language = int(language)
- self.data = data[6:]
- self.ttFont = ttFont
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ("language", self.language),
- ],
- )
- writer.newline()
- codes = sorted(self.cmap.items())
- self._writeCodes(codes, writer)
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def getEncoding(self, default=None):
- """Returns the Python encoding name for this cmap subtable based on its platformID,
- platEncID, and language. If encoding for these values is not known, by default
- ``None`` is returned. That can be overridden by passing a value to the ``default``
- argument.
-
- Note that if you want to choose a "preferred" cmap subtable, most of the time
- ``self.isUnicode()`` is what you want as that one only returns true for the modern,
- commonly used, Unicode-compatible triplets, not the legacy ones.
- """
- return getEncoding(self.platformID, self.platEncID, self.language, default)
-
- def isUnicode(self):
- """Returns true if the characters are interpreted as Unicode codepoints."""
- return self.platformID == 0 or (
- self.platformID == 3 and self.platEncID in [0, 1, 10]
- )
-
- def isSymbol(self):
- """Returns true if the subtable is for the Symbol encoding (3,0)"""
- return self.platformID == 3 and self.platEncID == 0
-
- def _writeCodes(self, codes, writer):
- isUnicode = self.isUnicode()
- for code, name in codes:
- writer.simpletag("map", code=hex(code), name=name)
- if isUnicode:
- writer.comment(Unicode[code])
- writer.newline()
-
- def __lt__(self, other):
- if not isinstance(other, CmapSubtable):
- return NotImplemented
-
- # implemented so that list.sort() sorts according to the spec.
- selfTuple = (
- getattr(self, "platformID", None),
- getattr(self, "platEncID", None),
- getattr(self, "language", None),
- self.__dict__,
- )
- otherTuple = (
- getattr(other, "platformID", None),
- getattr(other, "platEncID", None),
- getattr(other, "language", None),
- other.__dict__,
- )
- return selfTuple < otherTuple
-
-
-class cmap_format_0(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- assert 262 == self.length, "Format 0 cmap subtable not 262 bytes"
- gids = array.array("B")
- gids.frombytes(self.data)
- charCodes = list(range(len(gids)))
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return struct.pack(">HHH", 0, 262, self.language) + self.data
-
- cmap = self.cmap
- assert set(cmap.keys()).issubset(range(256))
- getGlyphID = ttFont.getGlyphID
- valueList = [getGlyphID(cmap[i]) if i in cmap else 0 for i in range(256)]
-
- gids = array.array("B", valueList)
- data = struct.pack(">HHH", 0, 262, self.language) + gids.tobytes()
- assert len(data) == 262
- return data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-subHeaderFormat = ">HHhH"
-
-
-class SubHeader(object):
- def __init__(self):
- self.firstCode = None
- self.entryCount = None
- self.idDelta = None
- self.idRangeOffset = None
- self.glyphIndexArray = []
-
-
-class cmap_format_2(CmapSubtable):
- def setIDDelta(self, subHeader):
- subHeader.idDelta = 0
- # find the minGI which is not zero.
- minGI = subHeader.glyphIndexArray[0]
- for gid in subHeader.glyphIndexArray:
- if (gid != 0) and (gid < minGI):
- minGI = gid
- # The lowest gid in glyphIndexArray, after subtracting idDelta, must be 1.
- # idDelta is a short, and must be between -32K and 32K. minGI can be between 1 and 64K.
- # We would like to pick an idDelta such that the first glyphArray GID is 1,
-        # so that we are more likely to be able to combine glyphArray GID subranges.
-        # This means that we have a problem when minGI is > 32K.
-        # Since the final gi is reconstructed from the glyphArray GID by:
-        # (short)finalGID = (gid + idDelta) % 0x10000,
-        # we can get from a glyphArray GID of 1 to a final GID of 65K by subtracting 2, and casting the
- # negative number to an unsigned short.
-
- if minGI > 1:
- if minGI > 0x7FFF:
- subHeader.idDelta = -(0x10000 - minGI) - 1
- else:
- subHeader.idDelta = minGI - 1
- idDelta = subHeader.idDelta
- for i in range(subHeader.entryCount):
- gid = subHeader.glyphIndexArray[i]
- if gid > 0:
- subHeader.glyphIndexArray[i] = gid - idDelta
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- subHeaderKeys = []
- maxSubHeaderindex = 0
- # get the key array, and determine the number of subHeaders.
- allKeys = array.array("H")
- allKeys.frombytes(data[:512])
- data = data[512:]
- if sys.byteorder != "big":
- allKeys.byteswap()
- subHeaderKeys = [key // 8 for key in allKeys]
- maxSubHeaderindex = max(subHeaderKeys)
-
- # Load subHeaders
- subHeaderList = []
- pos = 0
- for i in range(maxSubHeaderindex + 1):
- subHeader = SubHeader()
- (
- subHeader.firstCode,
- subHeader.entryCount,
- subHeader.idDelta,
- subHeader.idRangeOffset,
- ) = struct.unpack(subHeaderFormat, data[pos : pos + 8])
- pos += 8
- giDataPos = pos + subHeader.idRangeOffset - 2
- giList = array.array("H")
- giList.frombytes(data[giDataPos : giDataPos + subHeader.entryCount * 2])
- if sys.byteorder != "big":
- giList.byteswap()
- subHeader.glyphIndexArray = giList
- subHeaderList.append(subHeader)
- # How this gets processed.
- # Charcodes may be one or two bytes.
- # The first byte of a charcode is mapped through the subHeaderKeys, to select
- # a subHeader. For any subheader but 0, the next byte is then mapped through the
- # selected subheader. If subheader Index 0 is selected, then the byte itself is
- # mapped through the subheader, and there is no second byte.
-        # Then assume that the subsequent byte is the first byte of the next charcode, and repeat.
-        #
-        # Each subheader references a range in the glyphIndexArray whose length is entryCount.
-        # The range in glyphIndexArray referenced by a subheader may overlap with the range in glyphIndexArray
-        # referenced by another subheader.
-        # The only subheader that will be referenced by more than one first-byte value is the subheader
-        # that maps the entire range of glyphID values to glyphIndex 0, e.g. notdef:
-        # {firstChar 0, EntryCount 0, idDelta 0, idRangeOffset xx}
-        # A byte being mapped through a subheader is treated as an index into a mapping of array index to font glyphIndex.
- # A subheader specifies a subrange within (0...256) by the
- # firstChar and EntryCount values. If the byte value is outside the subrange, then the glyphIndex is zero
- # (e.g. glyph not in font).
- # If the byte index is in the subrange, then an offset index is calculated as (byteIndex - firstChar).
- # The index to glyphIndex mapping is a subrange of the glyphIndexArray. You find the start of the subrange by
- # counting idRangeOffset bytes from the idRangeOffset word. The first value in this subrange is the
- # glyphIndex for the index firstChar. The offset index should then be used in this array to get the glyphIndex.
- # Example for Logocut-Medium
- # first byte of charcode = 129; selects subheader 1.
- # subheader 1 = {firstChar 64, EntryCount 108,idDelta 42,idRangeOffset 0252}
- # second byte of charCode = 66
- # the index offset = 66-64 = 2.
- # The subrange of the glyphIndexArray starting at 0x0252 bytes from the idRangeOffset word is:
- # [glyphIndexArray index], [subrange array index] = glyphIndex
- # [256], [0]=1 from charcode [129, 64]
- # [257], [1]=2 from charcode [129, 65]
- # [258], [2]=3 from charcode [129, 66]
- # [259], [3]=4 from charcode [129, 67]
- # So, the glyphIndex = 3 from the array. Then if idDelta is not zero and the glyph ID is not zero,
- # add it to the glyphID to get the final glyphIndex
- # value. In this case the final glyph index = 3+ 42 -> 45 for the final glyphIndex. Whew!
-
- self.data = b""
- cmap = {}
- notdefGI = 0
- for firstByte in range(256):
- subHeadindex = subHeaderKeys[firstByte]
- subHeader = subHeaderList[subHeadindex]
- if subHeadindex == 0:
- if (firstByte < subHeader.firstCode) or (
- firstByte >= subHeader.firstCode + subHeader.entryCount
- ):
- continue # gi is notdef.
- else:
- charCode = firstByte
- offsetIndex = firstByte - subHeader.firstCode
- gi = subHeader.glyphIndexArray[offsetIndex]
- if gi != 0:
- gi = (gi + subHeader.idDelta) % 0x10000
- else:
- continue # gi is notdef.
- cmap[charCode] = gi
- else:
- if subHeader.entryCount:
- charCodeOffset = firstByte * 256 + subHeader.firstCode
- for offsetIndex in range(subHeader.entryCount):
- charCode = charCodeOffset + offsetIndex
- gi = subHeader.glyphIndexArray[offsetIndex]
- if gi != 0:
- gi = (gi + subHeader.idDelta) % 0x10000
- else:
- continue
- cmap[charCode] = gi
- # If not subHeader.entryCount, then all char codes with this first byte are
- # mapped to .notdef. We can skip this subtable, and leave the glyphs un-encoded, which is the
- # same as mapping it to .notdef.
-
- gids = list(cmap.values())
- charCodes = list(cmap.keys())
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
- kEmptyTwoCharCodeRange = -1
- notdefGI = 0
-
- items = sorted(self.cmap.items())
- charCodes = [item[0] for item in items]
- names = [item[1] for item in items]
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 2 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
-
- # Process the (char code to gid) item list in char code order.
- # By definition, all one byte char codes map to subheader 0.
-        # For all the two byte char codes, we assume that the first byte maps to the empty subhead (with an entry count of 0,
- # which defines all char codes in its range to map to notdef) unless proven otherwise.
- # Note that since the char code items are processed in char code order, all the char codes with the
- # same first byte are in sequential order.
-
- subHeaderKeys = [
- kEmptyTwoCharCodeRange for x in range(256)
- ] # list of indices into subHeaderList.
- subHeaderList = []
-
-        # We force this subheader entry 0 to exist in the subHeaderList in the case where someone comes up
- # with a cmap where all the one byte char codes map to notdef,
- # with the result that the subhead 0 would not get created just by processing the item list.
- charCode = charCodes[0]
- if charCode > 255:
- subHeader = SubHeader()
- subHeader.firstCode = 0
- subHeader.entryCount = 0
- subHeader.idDelta = 0
- subHeader.idRangeOffset = 0
- subHeaderList.append(subHeader)
-
- lastFirstByte = -1
- items = zip(charCodes, gids)
- for charCode, gid in items:
- if gid == 0:
- continue
- firstbyte = charCode >> 8
- secondByte = charCode & 0x00FF
-
- if (
- firstbyte != lastFirstByte
- ): # Need to update the current subhead, and start a new one.
- if lastFirstByte > -1:
- # fix GI's and iDelta of current subheader.
- self.setIDDelta(subHeader)
-
-                    # If it was subheader 0 for one-byte charCodes, then we need to set the subHeaderKeys value to zero
- # for the indices matching the char codes.
- if lastFirstByte == 0:
- for index in range(subHeader.entryCount):
- charCode = subHeader.firstCode + index
- subHeaderKeys[charCode] = 0
-
- assert subHeader.entryCount == len(
- subHeader.glyphIndexArray
- ), "Error - subhead entry count does not match len of glyphID subrange."
- # init new subheader
- subHeader = SubHeader()
- subHeader.firstCode = secondByte
- subHeader.entryCount = 1
- subHeader.glyphIndexArray.append(gid)
- subHeaderList.append(subHeader)
- subHeaderKeys[firstbyte] = len(subHeaderList) - 1
- lastFirstByte = firstbyte
- else:
- # need to fill in with notdefs all the code points between the last charCode and the current charCode.
- codeDiff = secondByte - (subHeader.firstCode + subHeader.entryCount)
- for i in range(codeDiff):
- subHeader.glyphIndexArray.append(notdefGI)
- subHeader.glyphIndexArray.append(gid)
- subHeader.entryCount = subHeader.entryCount + codeDiff + 1
-
-        # fix GI's and iDelta of the last subheader that we added to the subheader array.
- self.setIDDelta(subHeader)
-
- # Now we add a final subheader for the subHeaderKeys which maps to empty two byte charcode ranges.
- subHeader = SubHeader()
- subHeader.firstCode = 0
- subHeader.entryCount = 0
- subHeader.idDelta = 0
- subHeader.idRangeOffset = 2
- subHeaderList.append(subHeader)
- emptySubheadIndex = len(subHeaderList) - 1
- for index in range(256):
- if subHeaderKeys[index] == kEmptyTwoCharCodeRange:
- subHeaderKeys[index] = emptySubheadIndex
- # Since this is the last subheader, the GlyphIndex Array starts two bytes after the start of the
- # idRangeOffset word of this subHeader. We can safely point to the first entry in the GlyphIndexArray,
- # since the first subrange of the GlyphIndexArray is for subHeader 0, which always starts with
- # charcode 0 and GID 0.
-
- idRangeOffset = (
- len(subHeaderList) - 1
- ) * 8 + 2 # offset to beginning of glyphIDArray from first subheader idRangeOffset.
- subheadRangeLen = (
- len(subHeaderList) - 1
-        )  # skip last special empty-set subheader; we've already hardcoded its idRangeOffset to 2.
- for index in range(subheadRangeLen):
- subHeader = subHeaderList[index]
- subHeader.idRangeOffset = 0
- for j in range(index):
- prevSubhead = subHeaderList[j]
- if (
- prevSubhead.glyphIndexArray == subHeader.glyphIndexArray
- ): # use the glyphIndexArray subarray
- subHeader.idRangeOffset = (
- prevSubhead.idRangeOffset - (index - j) * 8
- )
- subHeader.glyphIndexArray = []
- break
- if subHeader.idRangeOffset == 0: # didn't find one.
- subHeader.idRangeOffset = idRangeOffset
- idRangeOffset = (
- idRangeOffset - 8
- ) + subHeader.entryCount * 2 # one less subheader, one more subArray.
- else:
- idRangeOffset = idRangeOffset - 8 # one less subheader
-
- # Now we can write out the data!
- length = (
- 6 + 512 + 8 * len(subHeaderList)
- ) # header, 256 subHeaderKeys, and subheader array.
- for subhead in subHeaderList[:-1]:
- length = (
- length + len(subhead.glyphIndexArray) * 2
- ) # We can't use subhead.entryCount, as some of the subhead may share subArrays.
- dataList = [struct.pack(">HHH", 2, length, self.language)]
- for index in subHeaderKeys:
- dataList.append(struct.pack(">H", index * 8))
- for subhead in subHeaderList:
- dataList.append(
- struct.pack(
- subHeaderFormat,
- subhead.firstCode,
- subhead.entryCount,
- subhead.idDelta,
- subhead.idRangeOffset,
- )
- )
- for subhead in subHeaderList[:-1]:
- for gi in subhead.glyphIndexArray:
- dataList.append(struct.pack(">H", gi))
- data = bytesjoin(dataList)
- assert len(data) == length, (
- "Error: cmap format 2 is not same length as calculated! actual: "
- + str(len(data))
- + " calc : "
- + str(length)
- )
- return data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-cmap_format_4_format = ">7H"
-
-# uint16 endCode[segCount] # Ending character code for each segment, last = 0xFFFF.
-# uint16 reservedPad # This value should be zero
-# uint16 startCode[segCount] # Starting character code for each segment
-# uint16 idDelta[segCount] # Delta for all character codes in segment
-# uint16 idRangeOffset[segCount] # Offset in bytes to glyph indexArray, or 0
-# uint16 glyphIndexArray[variable] # Glyph index array
-
-
-def splitRange(startCode, endCode, cmap):
- # Try to split a range of character codes into subranges with consecutive
- # glyph IDs in such a way that the cmap4 subtable can be stored "most"
- # efficiently. I can't prove I've got the optimal solution, but it seems
- # to do well with the fonts I tested: none became bigger, many became smaller.
- if startCode == endCode:
- return [], [endCode]
-
- lastID = cmap[startCode]
- lastCode = startCode
- inOrder = None
- orderedBegin = None
- subRanges = []
-
- # Gather subranges in which the glyph IDs are consecutive.
- for code in range(startCode + 1, endCode + 1):
- glyphID = cmap[code]
-
- if glyphID - 1 == lastID:
- if inOrder is None or not inOrder:
- inOrder = 1
- orderedBegin = lastCode
- else:
- if inOrder:
- inOrder = 0
- subRanges.append((orderedBegin, lastCode))
- orderedBegin = None
-
- lastID = glyphID
- lastCode = code
-
- if inOrder:
- subRanges.append((orderedBegin, lastCode))
- assert lastCode == endCode
-
- # Now filter out those new subranges that would only make the data bigger.
-    # A new segment costs 8 bytes, not using a new segment costs 2 bytes per
- # character.
- newRanges = []
- for b, e in subRanges:
- if b == startCode and e == endCode:
- break # the whole range, we're fine
- if b == startCode or e == endCode:
- threshold = 4 # split costs one more segment
- else:
- threshold = 8 # split costs two more segments
- if (e - b + 1) > threshold:
- newRanges.append((b, e))
- subRanges = newRanges
-
- if not subRanges:
- return [], [endCode]
-
- if subRanges[0][0] != startCode:
- subRanges.insert(0, (startCode, subRanges[0][0] - 1))
- if subRanges[-1][1] != endCode:
- subRanges.append((subRanges[-1][1] + 1, endCode))
-
- # Fill the "holes" in the segments list -- those are the segments in which
- # the glyph IDs are _not_ consecutive.
- i = 1
- while i < len(subRanges):
- if subRanges[i - 1][1] + 1 != subRanges[i][0]:
- subRanges.insert(i, (subRanges[i - 1][1] + 1, subRanges[i][0] - 1))
- i = i + 1
- i = i + 1
-
- # Transform the ranges into startCode/endCode lists.
- start = []
- end = []
- for b, e in subRanges:
- start.append(b)
- end.append(e)
- start.pop(0)
-
- assert len(start) + 1 == len(end)
- return start, end
-
-
-class cmap_format_4(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- (segCountX2, searchRange, entrySelector, rangeShift) = struct.unpack(
- ">4H", data[:8]
- )
- data = data[8:]
- segCount = segCountX2 // 2
-
- allCodes = array.array("H")
- allCodes.frombytes(data)
- self.data = data = None
-
- if sys.byteorder != "big":
- allCodes.byteswap()
-
- # divide the data
- endCode = allCodes[:segCount]
- allCodes = allCodes[segCount + 1 :] # the +1 is skipping the reservedPad field
- startCode = allCodes[:segCount]
- allCodes = allCodes[segCount:]
- idDelta = allCodes[:segCount]
- allCodes = allCodes[segCount:]
- idRangeOffset = allCodes[:segCount]
- glyphIndexArray = allCodes[segCount:]
- lenGIArray = len(glyphIndexArray)
-
- # build 2-byte character mapping
- charCodes = []
- gids = []
- for i in range(len(startCode) - 1): # don't do 0xffff!
- start = startCode[i]
- delta = idDelta[i]
- rangeOffset = idRangeOffset[i]
- partial = rangeOffset // 2 - start + i - len(idRangeOffset)
-
- rangeCharCodes = list(range(startCode[i], endCode[i] + 1))
- charCodes.extend(rangeCharCodes)
- if rangeOffset == 0:
- gids.extend(
- [(charCode + delta) & 0xFFFF for charCode in rangeCharCodes]
- )
- else:
- for charCode in rangeCharCodes:
- index = charCode + partial
- assert index < lenGIArray, (
- "In format 4 cmap, range (%d), the calculated index (%d) into the glyph index array is not less than the length of the array (%d) !"
- % (i, index, lenGIArray)
- )
- if glyphIndexArray[index] != 0: # if not missing glyph
- glyphID = glyphIndexArray[index] + delta
- else:
- glyphID = 0 # missing glyph
- gids.append(glyphID & 0xFFFF)
-
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
-
- charCodes = list(self.cmap.keys())
- if not charCodes:
- startCode = [0xFFFF]
- endCode = [0xFFFF]
- else:
- charCodes.sort()
- names = [self.cmap[code] for code in charCodes]
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 4 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
- cmap = {} # code:glyphID mapping
- for code, gid in zip(charCodes, gids):
- cmap[code] = gid
-
- # Build startCode and endCode lists.
- # Split the char codes in ranges of consecutive char codes, then split
- # each range in more ranges of consecutive/not consecutive glyph IDs.
- # See splitRange().
- lastCode = charCodes[0]
- endCode = []
- startCode = [lastCode]
- for charCode in charCodes[
- 1:
- ]: # skip the first code, it's the first start code
- if charCode == lastCode + 1:
- lastCode = charCode
- continue
- start, end = splitRange(startCode[-1], lastCode, cmap)
- startCode.extend(start)
- endCode.extend(end)
- startCode.append(charCode)
- lastCode = charCode
- start, end = splitRange(startCode[-1], lastCode, cmap)
- startCode.extend(start)
- endCode.extend(end)
- startCode.append(0xFFFF)
- endCode.append(0xFFFF)
-
- # build up rest of cruft
- idDelta = []
- idRangeOffset = []
- glyphIndexArray = []
- for i in range(len(endCode) - 1): # skip the closing codes (0xffff)
- indices = []
- for charCode in range(startCode[i], endCode[i] + 1):
- indices.append(cmap[charCode])
- if indices == list(range(indices[0], indices[0] + len(indices))):
- idDelta.append((indices[0] - startCode[i]) % 0x10000)
- idRangeOffset.append(0)
- else:
- idDelta.append(0)
- idRangeOffset.append(2 * (len(endCode) + len(glyphIndexArray) - i))
- glyphIndexArray.extend(indices)
- idDelta.append(1) # 0xffff + 1 == (tadaa!) 0. So this end code maps to .notdef
- idRangeOffset.append(0)
-
- # Insane.
- segCount = len(endCode)
- segCountX2 = segCount * 2
- searchRange, entrySelector, rangeShift = getSearchRange(segCount, 2)
-
- charCodeArray = array.array("H", endCode + [0] + startCode)
- idDeltaArray = array.array("H", idDelta)
- restArray = array.array("H", idRangeOffset + glyphIndexArray)
- if sys.byteorder != "big":
- charCodeArray.byteswap()
- if sys.byteorder != "big":
- idDeltaArray.byteswap()
- if sys.byteorder != "big":
- restArray.byteswap()
- data = charCodeArray.tobytes() + idDeltaArray.tobytes() + restArray.tobytes()
-
- length = struct.calcsize(cmap_format_4_format) + len(data)
- header = struct.pack(
- cmap_format_4_format,
- self.format,
- length,
- self.language,
- segCountX2,
- searchRange,
- entrySelector,
- rangeShift,
- )
- return header + data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- nameMap, attrsMap, dummyContent = element
- if nameMap != "map":
- assert 0, "Unrecognized keyword in cmap subtable"
- cmap[safeEval(attrsMap["code"])] = attrsMap["name"]
-
-
-class cmap_format_6(CmapSubtable):
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- firstCode, entryCount = struct.unpack(">HH", data[:4])
- firstCode = int(firstCode)
- data = data[4:]
- # assert len(data) == 2 * entryCount # XXX not true in Apple's Helvetica!!!
- gids = array.array("H")
- gids.frombytes(data[: 2 * int(entryCount)])
- if sys.byteorder != "big":
- gids.byteswap()
- self.data = data = None
-
- charCodes = list(range(firstCode, firstCode + len(gids)))
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(">HHH", self.format, self.length, self.language) + self.data
- )
- cmap = self.cmap
- codes = sorted(cmap.keys())
- if codes: # yes, there are empty cmap tables.
- codes = list(range(codes[0], codes[-1] + 1))
- firstCode = codes[0]
- valueList = [
- ttFont.getGlyphID(cmap[code]) if code in cmap else 0 for code in codes
- ]
- gids = array.array("H", valueList)
- if sys.byteorder != "big":
- gids.byteswap()
- data = gids.tobytes()
- else:
- data = b""
- firstCode = 0
- header = struct.pack(
- ">HHHHH", 6, len(data) + 10, self.language, firstCode, len(codes)
- )
- return header + data
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = safeEval(attrs["language"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-class cmap_format_12_or_13(CmapSubtable):
- def __init__(self, format):
- self.format = format
- self.reserved = 0
- self.data = None
- self.ttFont = None
-
- def decompileHeader(self, data, ttFont):
- format, reserved, length, language, nGroups = struct.unpack(">HHLLL", data[:16])
- assert (
- len(data) == (16 + nGroups * 12) == (length)
- ), "corrupt cmap table format %d (data length: %d, header length: %d)" % (
- self.format,
- len(data),
- length,
- )
- self.format = format
- self.reserved = reserved
- self.length = length
- self.language = language
- self.nGroups = nGroups
- self.data = data[16:]
- self.ttFont = ttFont
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- data = (
- self.data
- ) # decompileHeader assigns the data after the header to self.data
- charCodes = []
- gids = []
- pos = 0
- for i in range(self.nGroups):
- startCharCode, endCharCode, glyphID = struct.unpack(
- ">LLL", data[pos : pos + 12]
- )
- pos += 12
- lenGroup = 1 + endCharCode - startCharCode
- charCodes.extend(list(range(startCharCode, endCharCode + 1)))
- gids.extend(self._computeGIDs(glyphID, lenGroup))
- self.data = data = None
- self.cmap = _make_map(self.ttFont, charCodes, gids)
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(
- ">HHLLL",
- self.format,
- self.reserved,
- self.length,
- self.language,
- self.nGroups,
- )
- + self.data
- )
- charCodes = list(self.cmap.keys())
- names = list(self.cmap.values())
- nameMap = ttFont.getReverseGlyphMap()
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- nameMap = ttFont.getReverseGlyphMap(rebuild=True)
- try:
- gids = [nameMap[name] for name in names]
- except KeyError:
- # allow virtual GIDs in format 12 tables
- gids = []
- for name in names:
- try:
- gid = nameMap[name]
- except KeyError:
- try:
- if name[:3] == "gid":
- gid = int(name[3:])
- else:
- gid = ttFont.getGlyphID(name)
- except:
- raise KeyError(name)
-
- gids.append(gid)
-
- cmap = {} # code:glyphID mapping
- for code, gid in zip(charCodes, gids):
- cmap[code] = gid
-
- charCodes.sort()
- index = 0
- startCharCode = charCodes[0]
- startGlyphID = cmap[startCharCode]
- lastGlyphID = startGlyphID - self._format_step
- lastCharCode = startCharCode - 1
- nGroups = 0
- dataList = []
- maxIndex = len(charCodes)
- for index in range(maxIndex):
- charCode = charCodes[index]
- glyphID = cmap[charCode]
- if not self._IsInSameRun(glyphID, lastGlyphID, charCode, lastCharCode):
- dataList.append(
- struct.pack(">LLL", startCharCode, lastCharCode, startGlyphID)
- )
- startCharCode = charCode
- startGlyphID = glyphID
- nGroups = nGroups + 1
- lastGlyphID = glyphID
- lastCharCode = charCode
- dataList.append(struct.pack(">LLL", startCharCode, lastCharCode, startGlyphID))
- nGroups = nGroups + 1
- data = bytesjoin(dataList)
- lengthSubtable = len(data) + 16
- assert len(data) == (nGroups * 12) == (lengthSubtable - 16)
- return (
- struct.pack(
- ">HHLLL",
- self.format,
- self.reserved,
- lengthSubtable,
- self.language,
- nGroups,
- )
- + data
- )
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ("format", self.format),
- ("reserved", self.reserved),
- ("length", self.length),
- ("language", self.language),
- ("nGroups", self.nGroups),
- ],
- )
- writer.newline()
- codes = sorted(self.cmap.items())
- self._writeCodes(codes, writer)
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.format = safeEval(attrs["format"])
- self.reserved = safeEval(attrs["reserved"])
- self.length = safeEval(attrs["length"])
- self.language = safeEval(attrs["language"])
- self.nGroups = safeEval(attrs["nGroups"])
- if not hasattr(self, "cmap"):
- self.cmap = {}
- cmap = self.cmap
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- cmap[safeEval(attrs["code"])] = attrs["name"]
-
-
-class cmap_format_12(cmap_format_12_or_13):
-
- _format_step = 1
-
- def __init__(self, format=12):
- cmap_format_12_or_13.__init__(self, format)
-
- def _computeGIDs(self, startingGlyph, numberOfGlyphs):
- return list(range(startingGlyph, startingGlyph + numberOfGlyphs))
-
- def _IsInSameRun(self, glyphID, lastGlyphID, charCode, lastCharCode):
- return (glyphID == 1 + lastGlyphID) and (charCode == 1 + lastCharCode)
-
-
-class cmap_format_13(cmap_format_12_or_13):
-
- _format_step = 0
-
- def __init__(self, format=13):
- cmap_format_12_or_13.__init__(self, format)
-
- def _computeGIDs(self, startingGlyph, numberOfGlyphs):
- return [startingGlyph] * numberOfGlyphs
-
- def _IsInSameRun(self, glyphID, lastGlyphID, charCode, lastCharCode):
- return (glyphID == lastGlyphID) and (charCode == 1 + lastCharCode)
-
-
-def cvtToUVS(threeByteString):
- data = b"\0" + threeByteString
- (val,) = struct.unpack(">L", data)
- return val
-
-
-def cvtFromUVS(val):
- assert 0 <= val < 0x1000000
- fourByteString = struct.pack(">L", val)
- return fourByteString[1:]
-
-
-class cmap_format_14(CmapSubtable):
- def decompileHeader(self, data, ttFont):
- format, length, numVarSelectorRecords = struct.unpack(">HLL", data[:10])
- self.data = data[10:]
- self.length = length
- self.numVarSelectorRecords = numVarSelectorRecords
- self.ttFont = ttFont
- self.language = 0xFF # has no language.
-
- def decompile(self, data, ttFont):
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
- data = self.data
-
- self.cmap = (
- {}
- ) # so that clients that expect this to exist in a cmap table won't fail.
- uvsDict = {}
- recOffset = 0
- for n in range(self.numVarSelectorRecords):
- uvs, defOVSOffset, nonDefUVSOffset = struct.unpack(
- ">3sLL", data[recOffset : recOffset + 11]
- )
- recOffset += 11
- varUVS = cvtToUVS(uvs)
- if defOVSOffset:
- startOffset = defOVSOffset - 10
- (numValues,) = struct.unpack(">L", data[startOffset : startOffset + 4])
- startOffset += 4
- for r in range(numValues):
- uv, addtlCnt = struct.unpack(
- ">3sB", data[startOffset : startOffset + 4]
- )
- startOffset += 4
- firstBaseUV = cvtToUVS(uv)
- cnt = addtlCnt + 1
- baseUVList = list(range(firstBaseUV, firstBaseUV + cnt))
- glyphList = [None] * cnt
- localUVList = zip(baseUVList, glyphList)
- try:
- uvsDict[varUVS].extend(localUVList)
- except KeyError:
- uvsDict[varUVS] = list(localUVList)
-
- if nonDefUVSOffset:
- startOffset = nonDefUVSOffset - 10
- (numRecs,) = struct.unpack(">L", data[startOffset : startOffset + 4])
- startOffset += 4
- localUVList = []
- for r in range(numRecs):
- uv, gid = struct.unpack(">3sH", data[startOffset : startOffset + 5])
- startOffset += 5
- uv = cvtToUVS(uv)
- glyphName = self.ttFont.getGlyphName(gid)
- localUVList.append((uv, glyphName))
- try:
- uvsDict[varUVS].extend(localUVList)
- except KeyError:
- uvsDict[varUVS] = localUVList
-
- self.uvsDict = uvsDict
-
- def toXML(self, writer, ttFont):
- writer.begintag(
- self.__class__.__name__,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ],
- )
- writer.newline()
- uvsDict = self.uvsDict
- uvsList = sorted(uvsDict.keys())
- for uvs in uvsList:
- uvList = uvsDict[uvs]
- uvList.sort(key=lambda item: (item[1] is not None, item[0], item[1]))
- for uv, gname in uvList:
- attrs = [("uv", hex(uv)), ("uvs", hex(uvs))]
- if gname is not None:
- attrs.append(("name", gname))
- writer.simpletag("map", attrs)
- writer.newline()
- writer.endtag(self.__class__.__name__)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.language = 0xFF # provide a value so that CmapSubtable.__lt__() won't fail
- if not hasattr(self, "cmap"):
- self.cmap = (
- {}
- ) # so that clients that expect this to exist in a cmap table won't fail.
- if not hasattr(self, "uvsDict"):
- self.uvsDict = {}
- uvsDict = self.uvsDict
-
- # For backwards compatibility reasons we accept "None" as an indicator
- # for "default mapping", unless the font actually has a glyph named
- # "None".
- _hasGlyphNamedNone = None
-
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- if name != "map":
- continue
- uvs = safeEval(attrs["uvs"])
- uv = safeEval(attrs["uv"])
- gname = attrs.get("name")
- if gname == "None":
- if _hasGlyphNamedNone is None:
- _hasGlyphNamedNone = "None" in ttFont.getGlyphOrder()
- if not _hasGlyphNamedNone:
- gname = None
- try:
- uvsDict[uvs].append((uv, gname))
- except KeyError:
- uvsDict[uvs] = [(uv, gname)]
-
- def compile(self, ttFont):
- if self.data:
- return (
- struct.pack(
- ">HLL", self.format, self.length, self.numVarSelectorRecords
- )
- + self.data
- )
-
- uvsDict = self.uvsDict
- uvsList = sorted(uvsDict.keys())
- self.numVarSelectorRecords = len(uvsList)
- offset = (
- 10 + self.numVarSelectorRecords * 11
- ) # current value is end of VarSelectorRecords block.
- data = []
- varSelectorRecords = []
- for uvs in uvsList:
- entryList = uvsDict[uvs]
-
- defList = [entry for entry in entryList if entry[1] is None]
- if defList:
- defList = [entry[0] for entry in defList]
- defOVSOffset = offset
- defList.sort()
-
- lastUV = defList[0]
- cnt = -1
- defRecs = []
- for defEntry in defList:
- cnt += 1
- if (lastUV + cnt) != defEntry:
- rec = struct.pack(">3sB", cvtFromUVS(lastUV), cnt - 1)
- lastUV = defEntry
- defRecs.append(rec)
- cnt = 0
-
- rec = struct.pack(">3sB", cvtFromUVS(lastUV), cnt)
- defRecs.append(rec)
-
- numDefRecs = len(defRecs)
- data.append(struct.pack(">L", numDefRecs))
- data.extend(defRecs)
- offset += 4 + numDefRecs * 4
- else:
- defOVSOffset = 0
-
- ndefList = [entry for entry in entryList if entry[1] is not None]
- if ndefList:
- nonDefUVSOffset = offset
- ndefList.sort()
- numNonDefRecs = len(ndefList)
- data.append(struct.pack(">L", numNonDefRecs))
- offset += 4 + numNonDefRecs * 5
-
- for uv, gname in ndefList:
- gid = ttFont.getGlyphID(gname)
- ndrec = struct.pack(">3sH", cvtFromUVS(uv), gid)
- data.append(ndrec)
- else:
- nonDefUVSOffset = 0
-
- vrec = struct.pack(">3sLL", cvtFromUVS(uvs), defOVSOffset, nonDefUVSOffset)
- varSelectorRecords.append(vrec)
-
- data = bytesjoin(varSelectorRecords) + bytesjoin(data)
- self.length = 10 + len(data)
- headerdata = struct.pack(
- ">HLL", self.format, self.length, self.numVarSelectorRecords
- )
-
- return headerdata + data
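For reference, the uvsDict built by decompile() and consumed by compile() maps each variation selector to a list of (base codepoint, glyph name) pairs, with None marking a default-UVS entry. A hypothetical example of that shape (glyph names invented for illustration):

```python
# Hypothetical shape of uvsDict (glyph names invented for illustration):
uvsDict = {
    0xFE00: [                     # variation selector VS1
        (0x4E08, None),           # default UVS: use the glyph from the regular cmap
        (0x4E09, "uni4E09.vs1"),  # non-default UVS: explicit glyph for this combination
    ],
}
```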
-
-
-class cmap_format_unknown(CmapSubtable):
- def toXML(self, writer, ttFont):
- cmapName = self.__class__.__name__[:12] + str(self.format)
- writer.begintag(
- cmapName,
- [
- ("platformID", self.platformID),
- ("platEncID", self.platEncID),
- ],
- )
- writer.newline()
- writer.dumphex(self.data)
- writer.endtag(cmapName)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- self.data = readHex(content)
- self.cmap = {}
-
- def decompileHeader(self, data, ttFont):
- self.language = 0 # dummy value
- self.data = data
-
- def decompile(self, data, ttFont):
- # we usually get here indirectly from the subtable __getattr__ function, in which case both args must be None.
- # If not, someone is calling the subtable decompile() directly, and must provide both args.
- if data is not None and ttFont is not None:
- self.decompileHeader(data, ttFont)
- else:
- assert (
- data is None and ttFont is None
- ), "Need both data and ttFont arguments"
-
- def compile(self, ttFont):
- if self.data:
- return self.data
- else:
- return None
-
-
-cmap_classes = {
- 0: cmap_format_0,
- 2: cmap_format_2,
- 4: cmap_format_4,
- 6: cmap_format_6,
- 12: cmap_format_12,
- 13: cmap_format_13,
- 14: cmap_format_14,
-}
diff --git a/spaces/cihyFjudo/fairness-paper-search/Driver Urmet Daruma Dr700 Serial How to Install and Configure the Non-Fiscal Printer.md b/spaces/cihyFjudo/fairness-paper-search/Driver Urmet Daruma Dr700 Serial How to Install and Configure the Non-Fiscal Printer.md
deleted file mode 100644
index 6b1993b3f524515ccf0da9e2f063e56cc7c5e99d..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Driver Urmet Daruma Dr700 Serial How to Install and Configure the Non-Fiscal Printer.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-STK (Start Kit DARUMA) Utilizando conversor Serial/Ethernet com Mini-Impressora DR600/DR700. Neste STK mostraremos como comunicar com o conversor Serial/Ethernet e instalar o driver Genérico Somente Texto com a impressora DR600/DR700, no Windo ws. Premissas: 1. Ter uma mini-impressora modelo DR600/DR700 que não possua conexão Ethernet mais que possua a porta serial. 2. Um computado r com Windo ws XP, Windo ws Seven ou Windo ws Vista instalado . 3. Ter conversor Serial/Ethernet com pilha TCP/IP integrado . 4. Possuir um cabo de rede normal (não utilizar cabo crossover). Este STK divide-se em 4 partes: 1. Configurando o IP do conversor em um computado r local. 2. Amarrando o conversor a um IP do Servido r DHCP. 3. Como instalar a DR600/DR700 com o driver Genérico Somente Texto. 4. Como configurar o Genérico Somente Texto. 1. Configurando o IP do conversor em um computado r local. 1.1 Para configurar e efetuar testes, utilizamos um conversor da marca Comm5 (www.comm5.com.br), onde poderá também ser utilizada outra marca da preferencia, onde obrigatoriamente deverá possuir a comunicação Serial/Ethernet. Primeiramente você deverá conectar uma ponta do cabo de rede em seu computado r e a outra ponta do cabo em seu conversor. 1.2 O IP do conversor vem informado no Manual ou na Caixa onde foi enviado . O conversor utilizado para testes veio configurado com o IP 192.168.0.103. (Este endereço IP é de Fabrica para que você acesse ao conversor e configure ele pela primeira vez) 1.3 Para Configurarmos e Acessarmos ao conversor, teremos que configurar nosso endereço de IP de nosso computado r, dentro da mesma família de IP dele. Para isso, ir até o Painel de Controle / Central de Rede e Compartilhamento / Conexão Local / Propriedades / Protocolo TCP/IP Versão 4 (TCP/IP V4) / Propriedades / Colocar o Endereço de IP com o final de 001 a 254 (em nosso caso, exceto o IP do conversor, final 103) , um IP que você pode usar no seu computado r é este aqui que uso abaixo, final 102.
-Driver Urmet Daruma Dr700 Serial Download Zip –––––>>> https://tinurli.com/2uwkkT
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/EurekaLog 7.6.6.0 Enterprise with Full Source The Best Solution for Exception Handling and Memory Management in Delphi and CBuilder.md b/spaces/cihyFjudo/fairness-paper-search/EurekaLog 7.6.6.0 Enterprise with Full Source The Best Solution for Exception Handling and Memory Management in Delphi and CBuilder.md
deleted file mode 100644
index 678cb8f6b68896d45606f6eaf200c1d90dd12b76..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/EurekaLog 7.6.6.0 Enterprise with Full Source The Best Solution for Exception Handling and Memory Management in Delphi and CBuilder.md
+++ /dev/null
@@ -1,6 +0,0 @@
-EurekaLog 7.6.6.0 Enterprise with Full Source Download Zip 🆗 https://tinurli.com/2uwi1h
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/staticfiles.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/staticfiles.py
deleted file mode 100644
index 299015d4fef268cde91273790251f35192e1c8a6..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/staticfiles.py
+++ /dev/null
@@ -1 +0,0 @@
-from starlette.staticfiles import StaticFiles as StaticFiles # noqa
diff --git "a/spaces/codertoro/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/codertoro/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py"
deleted file mode 100644
index 3868885d4cd1d610bbc882ee191e6d7965c5f6ad..0000000000000000000000000000000000000000
--- "a/spaces/codertoro/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py"
+++ /dev/null
@@ -1,160 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-
-fast_debug = False
-
-def readPdf(pdfPath):
- """
- 读取pdf文件,返回文本内容
- """
- import pdfminer
- from pdfminer.pdfparser import PDFParser
- from pdfminer.pdfdocument import PDFDocument
- from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed
- from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
- from pdfminer.pdfdevice import PDFDevice
- from pdfminer.layout import LAParams
- from pdfminer.converter import PDFPageAggregator
-
- fp = open(pdfPath, 'rb')
-
- # Create a PDF parser object associated with the file object
- parser = PDFParser(fp)
-
- # Create a PDF document object that stores the document structure.
- # Password for initialization as 2nd parameter
- document = PDFDocument(parser)
- # Check if the document allows text extraction. If not, abort.
- if not document.is_extractable:
- raise PDFTextExtractionNotAllowed
-
- # Create a PDF resource manager object that stores shared resources.
- rsrcmgr = PDFResourceManager()
-
- # Create a PDF device object.
- # device = PDFDevice(rsrcmgr)
-
- # BEGIN LAYOUT ANALYSIS.
- # Set parameters for analysis.
- laparams = LAParams(
- char_margin=10.0,
- line_margin=0.2,
- boxes_flow=0.2,
- all_texts=False,
- )
- # Create a PDF page aggregator object.
- device = PDFPageAggregator(rsrcmgr, laparams=laparams)
- # Create a PDF interpreter object.
- interpreter = PDFPageInterpreter(rsrcmgr, device)
-
- # loop over all pages in the document
- outTextList = []
- for page in PDFPage.create_pages(document):
- # read the page into a layout object
- interpreter.process_page(page)
- layout = device.get_result()
- for obj in layout._objs:
- if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal):
- # print(obj.get_text())
- outTextList.append(obj.get_text())
-
- return outTextList
-
-
-def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import time, glob, os
- from bs4 import BeautifulSoup
- print('begin analysis on:', file_manifest)
- for index, fp in enumerate(file_manifest):
- if ".tex" in fp:
- with open(fp, 'r', encoding='utf-8') as f:
- file_content = f.read()
- if ".pdf" in fp.lower():
- file_content = readPdf(fp)
- file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk')
-
- prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
- i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say_show_user,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=[],
- sys_prompt="总结文章。"
- ) # 带超时倒计时
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
- if not fast_debug: time.sleep(2)
-
- all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
- i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=history,
- sys_prompt="总结文章。"
- ) # 带超时倒计时
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
-
-
-
-@CatchException
-def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- history = [] # 清空历史,以免输入溢出
- import glob, os
-
- # 基本信息:功能、贡献者
- chatbot.append([
- "函数插件功能?",
- "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"])
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- # 尝试导入依赖,如果缺少依赖,则给出安装建议
- try:
- import pdfminer, bs4
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
- [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
- # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
- return
- yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
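The readPdf() helper above drives pdfminer's parser, resource manager and page aggregator by hand. Roughly the same extraction can be sketched with pdfminer.six's high-level API (a simplified sketch, assuming pdfminer.six is installed; the layout parameters mirror the LAParams used above):

```python
from pdfminer.high_level import extract_pages
from pdfminer.layout import LAParams, LTTextBoxHorizontal

def read_pdf_text(path):
    # Collect the text of every horizontal text box, page by page,
    # with the same layout parameters as readPdf() above.
    laparams = LAParams(char_margin=10.0, line_margin=0.2,
                        boxes_flow=0.2, all_texts=False)
    texts = []
    for page_layout in extract_pages(path, laparams=laparams):
        for element in page_layout:
            if isinstance(element, LTTextBoxHorizontal):
                texts.append(element.get_text())
    return texts
```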
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/asm-offsets.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/asm-offsets.h
deleted file mode 100644
index a2174b0a0899131b705392c99e82a70956f75033..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/asm-offsets.h
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Copyright (c) 2010 Mans Rullgard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_ARM_ASM_OFFSETS_H
-#define AVCODEC_ARM_ASM_OFFSETS_H
-
-/* MpegEncContext */
-#define Y_DC_SCALE 0x04
-#define C_DC_SCALE 0x08
-#define AC_PRED 0x0c
-#define BLOCK_LAST_INDEX 0x10
-#define H263_AIC 0x40
-#define INTER_SCANTAB_RASTER_END 0x88
-
-#endif /* AVCODEC_ARM_ASM_OFFSETS_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dvdsub.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dvdsub.c
deleted file mode 100644
index fb415677d993c94cd5685f9a1b85649c4cd2ba31..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dvdsub.c
+++ /dev/null
@@ -1,34 +0,0 @@
-/*
- * DVD subtitle decoding/encoding
- * Copyright (c) 2005 Fabrice Bellard
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/avstring.h"
-#include <stdlib.h>
-
-#include "dvdsub.h"
-
-void ff_dvdsub_parse_palette(uint32_t *palette, const char *p)
-{
- for (int i = 0; i < 16; i++) {
- palette[i] = strtoul(p, (char **)&p, 16);
- while (*p == ',' || av_isspace(*p))
- p++;
- }
-}
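ff_dvdsub_parse_palette() reads 16 hexadecimal colour values separated by commas and/or whitespace. A rough Python equivalent for well-formed input (the sample string is illustrative, not taken from a real IFO/IDX file):

```python
def parse_dvd_palette(s):
    # 16 hex colour values, separated by commas and/or whitespace,
    # in the format accepted by ff_dvdsub_parse_palette() above.
    return [int(tok, 16) for tok in s.replace(",", " ").split()[:16]]

palette = parse_dvd_palette(
    "000000, 828282, 828282, ffffff, 828282, 000000, 828282, 828282, "
    "828282, 000000, 828282, 828282, 828282, 000000, 828282, 828282"
)
assert len(palette) == 16 and palette[3] == 0xFFFFFF
```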
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/h264idct_lasx.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/h264idct_lasx.c
deleted file mode 100644
index 46bd3b74d5fbbf858862d13048194604ff069abb..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/h264idct_lasx.c
+++ /dev/null
@@ -1,498 +0,0 @@
-/*
- * Loongson LASX optimized h264dsp
- *
- * Copyright (c) 2021 Loongson Technology Corporation Limited
- * Contributed by Shiyou Yin
- * Xiwei Gu
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/loongarch/loongson_intrinsics.h"
-#include "h264dsp_lasx.h"
-#include "libavcodec/bit_depth_template.c"
-
-#define AVC_ITRANS_H(in0, in1, in2, in3, out0, out1, out2, out3) \
-{ \
- __m256i tmp0_m, tmp1_m, tmp2_m, tmp3_m; \
- \
- tmp0_m = __lasx_xvadd_h(in0, in2); \
- tmp1_m = __lasx_xvsub_h(in0, in2); \
- tmp2_m = __lasx_xvsrai_h(in1, 1); \
- tmp2_m = __lasx_xvsub_h(tmp2_m, in3); \
- tmp3_m = __lasx_xvsrai_h(in3, 1); \
- tmp3_m = __lasx_xvadd_h(in1, tmp3_m); \
- \
- LASX_BUTTERFLY_4_H(tmp0_m, tmp1_m, tmp2_m, tmp3_m, \
- out0, out1, out2, out3); \
-}
-
-void ff_h264_idct_add_lasx(uint8_t *dst, int16_t *src, int32_t dst_stride)
-{
- __m256i src0_m, src1_m, src2_m, src3_m;
- __m256i dst0_m, dst1_m;
- __m256i hres0, hres1, hres2, hres3, vres0, vres1, vres2, vres3;
- __m256i inp0_m, inp1_m, res0_m, src1, src3;
- __m256i src0 = __lasx_xvld(src, 0);
- __m256i src2 = __lasx_xvld(src, 16);
- __m256i zero = __lasx_xvldi(0);
- int32_t dst_stride_2x = dst_stride << 1;
- int32_t dst_stride_3x = dst_stride_2x + dst_stride;
-
- __lasx_xvst(zero, src, 0);
- DUP2_ARG2(__lasx_xvilvh_d, src0, src0, src2, src2, src1, src3);
- AVC_ITRANS_H(src0, src1, src2, src3, hres0, hres1, hres2, hres3);
- LASX_TRANSPOSE4x4_H(hres0, hres1, hres2, hres3, hres0, hres1, hres2, hres3);
- AVC_ITRANS_H(hres0, hres1, hres2, hres3, vres0, vres1, vres2, vres3);
- DUP4_ARG2(__lasx_xvldx, dst, 0, dst, dst_stride, dst, dst_stride_2x,
- dst, dst_stride_3x, src0_m, src1_m, src2_m, src3_m);
- DUP4_ARG2(__lasx_xvld, dst, 0, dst + dst_stride, 0, dst + dst_stride_2x,
- 0, dst + dst_stride_3x, 0, src0_m, src1_m, src2_m, src3_m);
- DUP2_ARG2(__lasx_xvilvl_d, vres1, vres0, vres3, vres2, inp0_m, inp1_m);
- inp0_m = __lasx_xvpermi_q(inp1_m, inp0_m, 0x20);
- inp0_m = __lasx_xvsrari_h(inp0_m, 6);
- DUP2_ARG2(__lasx_xvilvl_w, src1_m, src0_m, src3_m, src2_m, dst0_m, dst1_m);
- dst0_m = __lasx_xvilvl_d(dst1_m, dst0_m);
- res0_m = __lasx_vext2xv_hu_bu(dst0_m);
- res0_m = __lasx_xvadd_h(res0_m, inp0_m);
- res0_m = __lasx_xvclip255_h(res0_m);
- dst0_m = __lasx_xvpickev_b(res0_m, res0_m);
- __lasx_xvstelm_w(dst0_m, dst, 0, 0);
- __lasx_xvstelm_w(dst0_m, dst + dst_stride, 0, 1);
- __lasx_xvstelm_w(dst0_m, dst + dst_stride_2x, 0, 4);
- __lasx_xvstelm_w(dst0_m, dst + dst_stride_3x, 0, 5);
-}
-
-void ff_h264_idct8_addblk_lasx(uint8_t *dst, int16_t *src,
- int32_t dst_stride)
-{
- __m256i src0, src1, src2, src3, src4, src5, src6, src7;
- __m256i vec0, vec1, vec2, vec3;
- __m256i tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
- __m256i res0, res1, res2, res3, res4, res5, res6, res7;
- __m256i dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst7;
- __m256i zero = __lasx_xvldi(0);
- int32_t dst_stride_2x = dst_stride << 1;
- int32_t dst_stride_4x = dst_stride << 2;
- int32_t dst_stride_3x = dst_stride_2x + dst_stride;
-
- src[0] += 32;
- DUP4_ARG2(__lasx_xvld, src, 0, src, 16, src, 32, src, 48,
- src0, src1, src2, src3);
- DUP4_ARG2(__lasx_xvld, src, 64, src, 80, src, 96, src, 112,
- src4, src5, src6, src7);
- __lasx_xvst(zero, src, 0);
- __lasx_xvst(zero, src, 32);
- __lasx_xvst(zero, src, 64);
- __lasx_xvst(zero, src, 96);
-
- vec0 = __lasx_xvadd_h(src0, src4);
- vec1 = __lasx_xvsub_h(src0, src4);
- vec2 = __lasx_xvsrai_h(src2, 1);
- vec2 = __lasx_xvsub_h(vec2, src6);
- vec3 = __lasx_xvsrai_h(src6, 1);
- vec3 = __lasx_xvadd_h(src2, vec3);
-
- LASX_BUTTERFLY_4_H(vec0, vec1, vec2, vec3, tmp0, tmp1, tmp2, tmp3);
-
- vec0 = __lasx_xvsrai_h(src7, 1);
- vec0 = __lasx_xvsub_h(src5, vec0);
- vec0 = __lasx_xvsub_h(vec0, src3);
- vec0 = __lasx_xvsub_h(vec0, src7);
-
- vec1 = __lasx_xvsrai_h(src3, 1);
- vec1 = __lasx_xvsub_h(src1, vec1);
- vec1 = __lasx_xvadd_h(vec1, src7);
- vec1 = __lasx_xvsub_h(vec1, src3);
-
- vec2 = __lasx_xvsrai_h(src5, 1);
- vec2 = __lasx_xvsub_h(vec2, src1);
- vec2 = __lasx_xvadd_h(vec2, src7);
- vec2 = __lasx_xvadd_h(vec2, src5);
-
- vec3 = __lasx_xvsrai_h(src1, 1);
- vec3 = __lasx_xvadd_h(src3, vec3);
- vec3 = __lasx_xvadd_h(vec3, src5);
- vec3 = __lasx_xvadd_h(vec3, src1);
-
- tmp4 = __lasx_xvsrai_h(vec3, 2);
- tmp4 = __lasx_xvadd_h(tmp4, vec0);
- tmp5 = __lasx_xvsrai_h(vec2, 2);
- tmp5 = __lasx_xvadd_h(tmp5, vec1);
- tmp6 = __lasx_xvsrai_h(vec1, 2);
- tmp6 = __lasx_xvsub_h(tmp6, vec2);
- tmp7 = __lasx_xvsrai_h(vec0, 2);
- tmp7 = __lasx_xvsub_h(vec3, tmp7);
-
- LASX_BUTTERFLY_8_H(tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7,
- res0, res1, res2, res3, res4, res5, res6, res7);
- LASX_TRANSPOSE8x8_H(res0, res1, res2, res3, res4, res5, res6, res7,
- res0, res1, res2, res3, res4, res5, res6, res7);
-
- DUP4_ARG1(__lasx_vext2xv_w_h, res0, res1, res2, res3,
- tmp0, tmp1, tmp2, tmp3);
- DUP4_ARG1(__lasx_vext2xv_w_h, res4, res5, res6, res7,
- tmp4, tmp5, tmp6, tmp7);
- vec0 = __lasx_xvadd_w(tmp0, tmp4);
- vec1 = __lasx_xvsub_w(tmp0, tmp4);
-
- vec2 = __lasx_xvsrai_w(tmp2, 1);
- vec2 = __lasx_xvsub_w(vec2, tmp6);
- vec3 = __lasx_xvsrai_w(tmp6, 1);
- vec3 = __lasx_xvadd_w(vec3, tmp2);
-
- tmp0 = __lasx_xvadd_w(vec0, vec3);
- tmp2 = __lasx_xvadd_w(vec1, vec2);
- tmp4 = __lasx_xvsub_w(vec1, vec2);
- tmp6 = __lasx_xvsub_w(vec0, vec3);
-
- vec0 = __lasx_xvsrai_w(tmp7, 1);
- vec0 = __lasx_xvsub_w(tmp5, vec0);
- vec0 = __lasx_xvsub_w(vec0, tmp3);
- vec0 = __lasx_xvsub_w(vec0, tmp7);
-
- vec1 = __lasx_xvsrai_w(tmp3, 1);
- vec1 = __lasx_xvsub_w(tmp1, vec1);
- vec1 = __lasx_xvadd_w(vec1, tmp7);
- vec1 = __lasx_xvsub_w(vec1, tmp3);
-
- vec2 = __lasx_xvsrai_w(tmp5, 1);
- vec2 = __lasx_xvsub_w(vec2, tmp1);
- vec2 = __lasx_xvadd_w(vec2, tmp7);
- vec2 = __lasx_xvadd_w(vec2, tmp5);
-
- vec3 = __lasx_xvsrai_w(tmp1, 1);
- vec3 = __lasx_xvadd_w(tmp3, vec3);
- vec3 = __lasx_xvadd_w(vec3, tmp5);
- vec3 = __lasx_xvadd_w(vec3, tmp1);
-
- tmp1 = __lasx_xvsrai_w(vec3, 2);
- tmp1 = __lasx_xvadd_w(tmp1, vec0);
- tmp3 = __lasx_xvsrai_w(vec2, 2);
- tmp3 = __lasx_xvadd_w(tmp3, vec1);
- tmp5 = __lasx_xvsrai_w(vec1, 2);
- tmp5 = __lasx_xvsub_w(tmp5, vec2);
- tmp7 = __lasx_xvsrai_w(vec0, 2);
- tmp7 = __lasx_xvsub_w(vec3, tmp7);
-
- LASX_BUTTERFLY_4_W(tmp0, tmp2, tmp5, tmp7, res0, res1, res6, res7);
- LASX_BUTTERFLY_4_W(tmp4, tmp6, tmp1, tmp3, res2, res3, res4, res5);
-
- DUP4_ARG2(__lasx_xvsrai_w, res0, 6, res1, 6, res2, 6, res3, 6,
- res0, res1, res2, res3);
- DUP4_ARG2(__lasx_xvsrai_w, res4, 6, res5, 6, res6, 6, res7, 6,
- res4, res5, res6, res7);
- DUP4_ARG2(__lasx_xvpickev_h, res1, res0, res3, res2, res5, res4, res7,
- res6, res0, res1, res2, res3);
- DUP4_ARG2(__lasx_xvpermi_d, res0, 0xd8, res1, 0xd8, res2, 0xd8, res3, 0xd8,
- res0, res1, res2, res3);
-
- DUP4_ARG2(__lasx_xvldx, dst, 0, dst, dst_stride, dst, dst_stride_2x,
- dst, dst_stride_3x, dst0, dst1, dst2, dst3);
- dst += dst_stride_4x;
- DUP4_ARG2(__lasx_xvldx, dst, 0, dst, dst_stride, dst, dst_stride_2x,
- dst, dst_stride_3x, dst4, dst5, dst6, dst7);
- dst -= dst_stride_4x;
- DUP4_ARG2(__lasx_xvilvl_b, zero, dst0, zero, dst1, zero, dst2, zero, dst3,
- dst0, dst1, dst2, dst3);
- DUP4_ARG2(__lasx_xvilvl_b, zero, dst4, zero, dst5, zero, dst6, zero, dst7,
- dst4, dst5, dst6, dst7);
- DUP4_ARG3(__lasx_xvpermi_q, dst1, dst0, 0x20, dst3, dst2, 0x20, dst5,
- dst4, 0x20, dst7, dst6, 0x20, dst0, dst1, dst2, dst3);
- res0 = __lasx_xvadd_h(res0, dst0);
- res1 = __lasx_xvadd_h(res1, dst1);
- res2 = __lasx_xvadd_h(res2, dst2);
- res3 = __lasx_xvadd_h(res3, dst3);
- DUP4_ARG1(__lasx_xvclip255_h, res0, res1, res2, res3, res0, res1,
- res2, res3);
- DUP2_ARG2(__lasx_xvpickev_b, res1, res0, res3, res2, res0, res1);
- __lasx_xvstelm_d(res0, dst, 0, 0);
- __lasx_xvstelm_d(res0, dst + dst_stride, 0, 2);
- __lasx_xvstelm_d(res0, dst + dst_stride_2x, 0, 1);
- __lasx_xvstelm_d(res0, dst + dst_stride_3x, 0, 3);
- dst += dst_stride_4x;
- __lasx_xvstelm_d(res1, dst, 0, 0);
- __lasx_xvstelm_d(res1, dst + dst_stride, 0, 2);
- __lasx_xvstelm_d(res1, dst + dst_stride_2x, 0, 1);
- __lasx_xvstelm_d(res1, dst + dst_stride_3x, 0, 3);
-}
-
-void ff_h264_idct4x4_addblk_dc_lasx(uint8_t *dst, int16_t *src,
- int32_t dst_stride)
-{
- const int16_t dc = (src[0] + 32) >> 6;
- int32_t dst_stride_2x = dst_stride << 1;
- int32_t dst_stride_3x = dst_stride_2x + dst_stride;
- __m256i pred, out;
- __m256i src0, src1, src2, src3;
- __m256i input_dc = __lasx_xvreplgr2vr_h(dc);
-
- src[0] = 0;
- DUP4_ARG2(__lasx_xvldx, dst, 0, dst, dst_stride, dst, dst_stride_2x,
- dst, dst_stride_3x, src0, src1, src2, src3);
- DUP2_ARG2(__lasx_xvilvl_w, src1, src0, src3, src2, src0, src1);
-
- pred = __lasx_xvpermi_q(src0, src1, 0x02);
- pred = __lasx_xvaddw_h_h_bu(input_dc, pred);
- pred = __lasx_xvclip255_h(pred);
- out = __lasx_xvpickev_b(pred, pred);
- __lasx_xvstelm_w(out, dst, 0, 0);
- __lasx_xvstelm_w(out, dst + dst_stride, 0, 1);
- __lasx_xvstelm_w(out, dst + dst_stride_2x, 0, 4);
- __lasx_xvstelm_w(out, dst + dst_stride_3x, 0, 5);
-}
-
-void ff_h264_idct8_dc_addblk_lasx(uint8_t *dst, int16_t *src,
- int32_t dst_stride)
-{
- int32_t dc_val;
- int32_t dst_stride_2x = dst_stride << 1;
- int32_t dst_stride_4x = dst_stride << 2;
- int32_t dst_stride_3x = dst_stride_2x + dst_stride;
- __m256i dst0, dst1, dst2, dst3, dst4, dst5, dst6, dst7;
- __m256i dc;
-
- dc_val = (src[0] + 32) >> 6;
- dc = __lasx_xvreplgr2vr_h(dc_val);
-
- src[0] = 0;
-
- DUP4_ARG2(__lasx_xvldx, dst, 0, dst, dst_stride, dst, dst_stride_2x,
- dst, dst_stride_3x, dst0, dst1, dst2, dst3);
- dst += dst_stride_4x;
- DUP4_ARG2(__lasx_xvldx, dst, 0, dst, dst_stride, dst, dst_stride_2x,
- dst, dst_stride_3x, dst4, dst5, dst6, dst7);
- dst -= dst_stride_4x;
- DUP4_ARG1(__lasx_vext2xv_hu_bu, dst0, dst1, dst2, dst3,
- dst0, dst1, dst2, dst3);
- DUP4_ARG1(__lasx_vext2xv_hu_bu, dst4, dst5, dst6, dst7,
- dst4, dst5, dst6, dst7);
- DUP4_ARG3(__lasx_xvpermi_q, dst1, dst0, 0x20, dst3, dst2, 0x20, dst5,
- dst4, 0x20, dst7, dst6, 0x20, dst0, dst1, dst2, dst3);
- dst0 = __lasx_xvadd_h(dst0, dc);
- dst1 = __lasx_xvadd_h(dst1, dc);
- dst2 = __lasx_xvadd_h(dst2, dc);
- dst3 = __lasx_xvadd_h(dst3, dc);
- DUP4_ARG1(__lasx_xvclip255_h, dst0, dst1, dst2, dst3,
- dst0, dst1, dst2, dst3);
- DUP2_ARG2(__lasx_xvpickev_b, dst1, dst0, dst3, dst2, dst0, dst1);
- __lasx_xvstelm_d(dst0, dst, 0, 0);
- __lasx_xvstelm_d(dst0, dst + dst_stride, 0, 2);
- __lasx_xvstelm_d(dst0, dst + dst_stride_2x, 0, 1);
- __lasx_xvstelm_d(dst0, dst + dst_stride_3x, 0, 3);
- dst += dst_stride_4x;
- __lasx_xvstelm_d(dst1, dst, 0, 0);
- __lasx_xvstelm_d(dst1, dst + dst_stride, 0, 2);
- __lasx_xvstelm_d(dst1, dst + dst_stride_2x, 0, 1);
- __lasx_xvstelm_d(dst1, dst + dst_stride_3x, 0, 3);
-}
-
-void ff_h264_idct_add16_lasx(uint8_t *dst,
- const int32_t *blk_offset,
- int16_t *block, int32_t dst_stride,
- const uint8_t nzc[15 * 8])
-{
- int32_t i;
-
- for (i = 0; i < 16; i++) {
- int32_t nnz = nzc[scan8[i]];
-
- if (nnz) {
- if (nnz == 1 && ((dctcoef *) block)[i * 16])
- ff_h264_idct4x4_addblk_dc_lasx(dst + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- else
- ff_h264_idct_add_lasx(dst + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- }
- }
-}
-
-void ff_h264_idct8_add4_lasx(uint8_t *dst, const int32_t *blk_offset,
- int16_t *block, int32_t dst_stride,
- const uint8_t nzc[15 * 8])
-{
- int32_t cnt;
-
- for (cnt = 0; cnt < 16; cnt += 4) {
- int32_t nnz = nzc[scan8[cnt]];
-
- if (nnz) {
- if (nnz == 1 && ((dctcoef *) block)[cnt * 16])
- ff_h264_idct8_dc_addblk_lasx(dst + blk_offset[cnt],
- block + cnt * 16 * sizeof(pixel),
- dst_stride);
- else
- ff_h264_idct8_addblk_lasx(dst + blk_offset[cnt],
- block + cnt * 16 * sizeof(pixel),
- dst_stride);
- }
- }
-}
-
-
-void ff_h264_idct_add8_lasx(uint8_t **dst,
- const int32_t *blk_offset,
- int16_t *block, int32_t dst_stride,
- const uint8_t nzc[15 * 8])
-{
- int32_t i;
-
- for (i = 16; i < 20; i++) {
- if (nzc[scan8[i]])
- ff_h264_idct_add_lasx(dst[0] + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- else if (((dctcoef *) block)[i * 16])
- ff_h264_idct4x4_addblk_dc_lasx(dst[0] + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- }
- for (i = 32; i < 36; i++) {
- if (nzc[scan8[i]])
- ff_h264_idct_add_lasx(dst[1] + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- else if (((dctcoef *) block)[i * 16])
- ff_h264_idct4x4_addblk_dc_lasx(dst[1] + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- }
-}
-
-void ff_h264_idct_add8_422_lasx(uint8_t **dst,
- const int32_t *blk_offset,
- int16_t *block, int32_t dst_stride,
- const uint8_t nzc[15 * 8])
-{
- int32_t i;
-
- for (i = 16; i < 20; i++) {
- if (nzc[scan8[i]])
- ff_h264_idct_add_lasx(dst[0] + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- else if (((dctcoef *) block)[i * 16])
- ff_h264_idct4x4_addblk_dc_lasx(dst[0] + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- }
- for (i = 32; i < 36; i++) {
- if (nzc[scan8[i]])
- ff_h264_idct_add_lasx(dst[1] + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- else if (((dctcoef *) block)[i * 16])
- ff_h264_idct4x4_addblk_dc_lasx(dst[1] + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- }
- for (i = 20; i < 24; i++) {
- if (nzc[scan8[i + 4]])
- ff_h264_idct_add_lasx(dst[0] + blk_offset[i + 4],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- else if (((dctcoef *) block)[i * 16])
- ff_h264_idct4x4_addblk_dc_lasx(dst[0] + blk_offset[i + 4],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- }
- for (i = 36; i < 40; i++) {
- if (nzc[scan8[i + 4]])
- ff_h264_idct_add_lasx(dst[1] + blk_offset[i + 4],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- else if (((dctcoef *) block)[i * 16])
- ff_h264_idct4x4_addblk_dc_lasx(dst[1] + blk_offset[i + 4],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- }
-}
-
-void ff_h264_idct_add16_intra_lasx(uint8_t *dst,
- const int32_t *blk_offset,
- int16_t *block,
- int32_t dst_stride,
- const uint8_t nzc[15 * 8])
-{
- int32_t i;
-
- for (i = 0; i < 16; i++) {
- if (nzc[scan8[i]])
- ff_h264_idct_add_lasx(dst + blk_offset[i],
- block + i * 16 * sizeof(pixel), dst_stride);
- else if (((dctcoef *) block)[i * 16])
- ff_h264_idct4x4_addblk_dc_lasx(dst + blk_offset[i],
- block + i * 16 * sizeof(pixel),
- dst_stride);
- }
-}
-
-void ff_h264_deq_idct_luma_dc_lasx(int16_t *dst, int16_t *src,
- int32_t de_qval)
-{
-#define DC_DEST_STRIDE 16
-
- __m256i src0, src1, src2, src3;
- __m256i vec0, vec1, vec2, vec3;
- __m256i tmp0, tmp1, tmp2, tmp3;
- __m256i hres0, hres1, hres2, hres3;
- __m256i vres0, vres1, vres2, vres3;
- __m256i de_q_vec = __lasx_xvreplgr2vr_w(de_qval);
-
- DUP4_ARG2(__lasx_xvld, src, 0, src, 8, src, 16, src, 24,
- src0, src1, src2, src3);
- LASX_TRANSPOSE4x4_H(src0, src1, src2, src3, tmp0, tmp1, tmp2, tmp3);
- LASX_BUTTERFLY_4_H(tmp0, tmp2, tmp3, tmp1, vec0, vec3, vec2, vec1);
- LASX_BUTTERFLY_4_H(vec0, vec1, vec2, vec3, hres0, hres3, hres2, hres1);
- LASX_TRANSPOSE4x4_H(hres0, hres1, hres2, hres3,
- hres0, hres1, hres2, hres3);
- LASX_BUTTERFLY_4_H(hres0, hres1, hres3, hres2, vec0, vec3, vec2, vec1);
- LASX_BUTTERFLY_4_H(vec0, vec1, vec2, vec3, vres0, vres1, vres2, vres3);
- DUP4_ARG1(__lasx_vext2xv_w_h, vres0, vres1, vres2, vres3,
- vres0, vres1, vres2, vres3);
- DUP2_ARG3(__lasx_xvpermi_q, vres1, vres0, 0x20, vres3, vres2, 0x20,
- vres0, vres1);
-
- vres0 = __lasx_xvmul_w(vres0, de_q_vec);
- vres1 = __lasx_xvmul_w(vres1, de_q_vec);
-
- vres0 = __lasx_xvsrari_w(vres0, 8);
- vres1 = __lasx_xvsrari_w(vres1, 8);
- vec0 = __lasx_xvpickev_h(vres1, vres0);
- vec0 = __lasx_xvpermi_d(vec0, 0xd8);
- __lasx_xvstelm_h(vec0, dst + 0 * DC_DEST_STRIDE, 0, 0);
- __lasx_xvstelm_h(vec0, dst + 2 * DC_DEST_STRIDE, 0, 1);
- __lasx_xvstelm_h(vec0, dst + 8 * DC_DEST_STRIDE, 0, 2);
- __lasx_xvstelm_h(vec0, dst + 10 * DC_DEST_STRIDE, 0, 3);
- __lasx_xvstelm_h(vec0, dst + 1 * DC_DEST_STRIDE, 0, 4);
- __lasx_xvstelm_h(vec0, dst + 3 * DC_DEST_STRIDE, 0, 5);
- __lasx_xvstelm_h(vec0, dst + 9 * DC_DEST_STRIDE, 0, 6);
- __lasx_xvstelm_h(vec0, dst + 11 * DC_DEST_STRIDE, 0, 7);
- __lasx_xvstelm_h(vec0, dst + 4 * DC_DEST_STRIDE, 0, 8);
- __lasx_xvstelm_h(vec0, dst + 6 * DC_DEST_STRIDE, 0, 9);
- __lasx_xvstelm_h(vec0, dst + 12 * DC_DEST_STRIDE, 0, 10);
- __lasx_xvstelm_h(vec0, dst + 14 * DC_DEST_STRIDE, 0, 11);
- __lasx_xvstelm_h(vec0, dst + 5 * DC_DEST_STRIDE, 0, 12);
- __lasx_xvstelm_h(vec0, dst + 7 * DC_DEST_STRIDE, 0, 13);
- __lasx_xvstelm_h(vec0, dst + 13 * DC_DEST_STRIDE, 0, 14);
- __lasx_xvstelm_h(vec0, dst + 15 * DC_DEST_STRIDE, 0, 15);
-
-#undef DC_DEST_STRIDE
-}
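The AVC_ITRANS_H macro used throughout this file is the 4-point H.264 inverse transform applied lane-wise. Assuming LASX_BUTTERFLY_4_H computes (in0+in3, in1+in2, in1-in2, in0-in3), a scalar reference for a single lane (without the final rounding shift) would look like this:

```python
def h264_itrans4(w0, w1, w2, w3):
    # 4-point H.264 inverse transform for one lane (no rounding shift here).
    e0 = w0 + w2
    e1 = w0 - w2
    e2 = (w1 >> 1) - w3
    e3 = w1 + (w3 >> 1)
    return e0 + e3, e1 + e2, e1 - e2, e0 - e3
```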
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Barbie Dreamhouse Adventures MOD APK 2022 Create Your Own Dream House with Unlocked VIP.md b/spaces/congsaPfin/Manga-OCR/logs/Barbie Dreamhouse Adventures MOD APK 2022 Create Your Own Dream House with Unlocked VIP.md
deleted file mode 100644
index 63c9682baf3236fce05e488dceef44192dcb5efd..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Barbie Dreamhouse Adventures MOD APK 2022 Create Your Own Dream House with Unlocked VIP.md
+++ /dev/null
@@ -1,156 +0,0 @@
-
-Barbie Dreamhouse Adventures Mod APK 2022: A Fun and Creative Game for Barbie Lovers
- If you are a fan of Barbie and her fabulous lifestyle, you will love playing Barbie Dreamhouse Adventures. This is a simulation game where you can design your own dream house and live an exciting life with Barbie and her friends. You can customize your characters, explore different rooms, go on adventures, and more. But what if you want to enjoy more features and content in the game? Well, you can do that by downloading the Mod APK version of Barbie Dreamhouse Adventures. This is a modified version of the game that offers an unlocked VIP feature, which gives you access to exclusive content, items, and special features that are not available in the original game. In this article, we will tell you everything you need to know about Barbie Dreamhouse Adventures Mod APK 2022, including how to download and install it on your Android device, why you should play it, and what are its main features.
-barbie dreamhouse adventures apk mod 2022 Download Zip ····· https://urlca.com/2uO6qR
- What is Barbie Dreamhouse Adventures?
- A simulation game where you can design your own dream house and live an exciting life with Barbie and her friends
- Barbie Dreamhouse Adventures is a popular game with millions of fans worldwide, where players get to create their dream house and live an exciting life with Barbie. You can choose from hundreds of design options to decorate your house, from wallpapers and furniture to accessories and pets. You can also customize your characters' outfits, hairstyles, makeup, and accessories. You can explore different rooms in your house, such as the kitchen, the living room, the bedroom, the bathroom, and more. You can also interact with Barbie's friends, such as Ken, Teresa, Nikki, Renee, Daisy, and others. You can go on fun adventures with them, such as baking cupcakes, going to the beach, having a pool party, going to the mall, and more. You can also take photos and videos of your creations and share them with other players online.
- A game with many features, such as customizing your characters, exploring different rooms, going on adventures, and more
- Barbie Dreamhouse Adventures is a game that offers many features for players to enjoy. Some of these features are:
-
-Customizing your characters: You can choose from various options to customize your characters' appearance, such as skin tone, eye color, hair color, hair style, makeup, clothes, shoes, accessories, and more. You can also unlock new outfits and items as you progress in the game.
-Exploring different rooms: You can explore different rooms in your house, such as the kitchen, the living room, the bedroom, the bathroom, and more. You can also decorate them with various items and accessories that suit your style. You can also interact with different objects in each room, such as cooking appliances, musical instruments, books, toys, etc. Going on adventures: You can go on fun adventures with Barbie and her friends, such as baking cupcakes, going to the beach, having a pool party, going to the mall, and more. You can also play mini-games and complete challenges to earn rewards and unlock new items. You can also discover new places and meet new characters along the way.
-Sharing your creations: You can take photos and videos of your dream house and your characters and share them with other players online. You can also view other players' creations and rate them. You can also chat with other players and make new friends.
-
-Barbie Dreamhouse Adventures is a game that lets you unleash your creativity and imagination and have fun with Barbie and her friends.
- What is the Mod APK version of Barbie Dreamhouse Adventures?
- A modified version of the game that offers an unlocked VIP feature
- Barbie Dreamhouse Adventures Mod APK 2022 is a modified version of the game that offers an unlocked VIP feature. This means that you can enjoy all the benefits and advantages of being a VIP member without paying any money. The VIP feature gives you access to exclusive content, items, and special features that are not available in the original game. For example, you can get unlimited coins and gems, which you can use to buy more items and outfits. You can also get unlimited energy, which you can use to play more adventures and mini-games. You can also get unlimited access to all the rooms in your house, which you can decorate as you wish. You can also get unlimited access to all the characters, outfits, accessories, pets, and more.
- A feature that gives you access to exclusive content, items, and special features that are not available in the original game
- The VIP feature of Barbie Dreamhouse Adventures Mod APK 2022 gives you access to exclusive content, items, and special features that are not available in the original game. Some of these are:
-barbie dreamhouse adventures mod apk 2023.4.1 (unlocked vip)
-barbie dreamhouse adventures hack apk 2022 (unlimited money)
-barbie dreamhouse adventures apk mod download for android
-barbie dreamhouse adventures mod apk latest version 2022
-barbie dreamhouse adventures mod apk free shopping
-barbie dreamhouse adventures mod apk unlimited gems and coins
-barbie dreamhouse adventures mod apk revdl
-barbie dreamhouse adventures mod apk rexdl
-barbie dreamhouse adventures mod apk happymod
-barbie dreamhouse adventures mod apk android 1
-barbie dreamhouse adventures mod apk obb
-barbie dreamhouse adventures mod apk offline
-barbie dreamhouse adventures mod apk no root
-barbie dreamhouse adventures mod apk vip unlocked
-barbie dreamhouse adventures mod apk all episodes unlocked
-barbie dreamhouse adventures mod apk full version
-barbie dreamhouse adventures mod apk premium
-barbie dreamhouse adventures mod apk pro
-barbie dreamhouse adventures mod apk pure
-barbie dreamhouse adventures mod apk unlimited everything
-barbie dreamhouse adventures mod apk 2022 update
-barbie dreamhouse adventures mod apk new version 2022
-barbie dreamhouse adventures mod apk old version 2022
-barbie dreamhouse adventures mod apk original
-barbie dreamhouse adventures mod apk online
-how to download barbie dreamhouse adventures mod apk 2022
-how to install barbie dreamhouse adventures mod apk 2022
-how to play barbie dreamhouse adventures mod apk 2022
-how to get barbie dreamhouse adventures mod apk 2022
-how to update barbie dreamhouse adventures mod apk 2022
-download game barbie dreamhouse adventures mod apk 2022
-download game android barbie dreamhouse adventures mod apk 2022
-download game offline barbie dreamhouse adventures mod apk 2022
-download game online barbie dreamhouse adventures mod apk 2022
-download game gratis barbie dreamhouse adventures mod apk 2022
-download game terbaru barbie dreamhouse adventures mod apk 2022
-download game terbaik barbie dreamhouse adventures mod apk 2022
-download game seru barbie dreamhouse adventures mod apk 2022
-download game lucu barbie dreamhouse adventures mod apk 2022
-download game asik barbie dreamhouse adventures mod apk 2022
-
-Exclusive rooms: You can access exclusive rooms in your house, such as the spa, the cinema, the rooftop terrace, and more. You can also decorate them with exclusive items and accessories.
-Exclusive outfits: You can access exclusive outfits for your characters, such as dresses, tops, bottoms, shoes, accessories, and more. You can also mix and match them to create your own unique style.
-Exclusive pets: You can access exclusive pets for your characters, such as dogs, cats, horses, unicorns, and more. You can also play with them and take care of them.
-Exclusive adventures: You can access exclusive adventures with Barbie and her friends, such as going to Paris, New York, London, Tokyo, and more. You can also explore new places and meet new characters.
-Exclusive mini-games: You can access exclusive mini-games with Barbie and her friends, such as fashion shows, dance contests, karaoke nights, cooking competitions, and more. You can also win prizes and rewards.
-
-The VIP feature of Barbie Dreamhouse Adventures Mod APK 2022 gives you a premium gaming experience that you will not find in the original game.
How to download and install Barbie Dreamhouse Adventures Mod APK 2022?
- A step-by-step guide on how to download and install the mod apk file on your Android device
- If you want to play Barbie Dreamhouse Adventures Mod APK 2022, you need to download and install the mod apk file on your Android device. Here is a step-by-step guide on how to do that:
-
-First, you need to enable the installation of apps from unknown sources on your device. To do that, go to Settings > Security > Unknown Sources and toggle it on.
-Next, you need to download the mod apk file from a reliable source. You can use the link below to download the latest version of Barbie Dreamhouse Adventures Mod APK 2022.
-After downloading the file, locate it in your device's file manager and tap on it to start the installation process. Follow the instructions on the screen to complete the installation.
-Once the installation is done, you can launch the game from your app drawer or home screen and enjoy playing Barbie Dreamhouse Adventures Mod APK 2022.
-
-Note: Before installing the mod apk file, make sure you have enough storage space on your device and a stable internet connection. Also, make sure you uninstall any previous versions of Barbie Dreamhouse Adventures from your device to avoid any conflicts.
- A table that shows the requirements and specifications of the mod apk file
- Here is a table that shows the requirements and specifications of Barbie Dreamhouse Adventures Mod APK 2022:
-
-
-Requirement/Specification
-Value
-
-
-App name
-Barbie Dreamhouse Adventures Mod APK 2022
-
-
-App size
-40 MB
-
-
-App version
-14.0.1
-
-
-Mod features
-Unlocked VIP feature, unlimited coins, gems, energy, access to all rooms, characters, outfits, pets, adventures, mini-games, etc.
-
-
-Android version required
-4.4 and up
-
-
-Last updated
-June 2022
-
-
-Developer
-Budge Studios
-
-
-Download link
-Barbie Dreamhouse Adventures Mod APK 2022 Download
- Why should you play Barbie Dreamhouse Adventures Mod APK 2022?
- A list of benefits and advantages of playing the mod apk version of the game
- Barbie Dreamhouse Adventures Mod APK 2022 is a game that offers many benefits and advantages for players who want to have more fun and creativity in the game. Some of these benefits and advantages are:
-
-You can enjoy the VIP feature for free, which gives you access to exclusive content, items, and special features that are not available in the original game.
-You can get unlimited coins and gems, which you can use to buy more items and outfits for your characters and your house.
-You can get unlimited energy, which you can use to play more adventures and mini-games with Barbie and her friends.
-You can get unlimited access to all the rooms in your house, which you can decorate as you wish with various items and accessories.
-You can get unlimited access to all the characters, outfits, accessories, pets, adventures, mini-games, and more in the game.
-You can enjoy a premium gaming experience that is more fun, creative, and exciting than the original game.
-
-Barbie Dreamhouse Adventures Mod APK 2022 is a game that gives you more value for your time and money.
- A summary of the main features and highlights of the game
- Barbie Dreamhouse Adventures Mod APK 2022 is a game that has many features and highlights that make it a fun and creative game for Barbie lovers. Some of these features and highlights are:
-
-A simulation game where you can design your own dream house and live an exciting life with Barbie and her friends.
-A game where you can customize your characters' appearance, outfits, hairstyles, makeup, and accessories.
-A game where you can explore different rooms in your house, such as the kitchen, the living room, the bedroom, the bathroom, and more.
-A game where you can go on fun adventures with Barbie and her friends, such as baking cupcakes, going to the beach, having a pool party, going to the mall, and more.
-A game where you can play mini-games and complete challenges to earn rewards and unlock new items.
-A game where you can take photos and videos of your creations and share them with other players online.
-A game where you can enjoy the VIP feature for free, which gives you access to exclusive content, items, and special features that are not available in the original game.
-
-Barbie Dreamhouse Adventures Mod APK 2022 is a game that lets you unleash your creativity and imagination and have fun with Barbie and her friends.
- Conclusion
- A brief recap of what the article has covered
- In this article, we have covered everything you need to know about Barbie Dreamhouse Adventures Mod APK 2022. We have explained what Barbie Dreamhouse Adventures is, what the Mod APK version of the game is, how to download and install it on your Android device, why you should play it, and what are its main features. We have also provided a table that shows the requirements and specifications of the mod apk file. We hope that this article has been helpful and informative for you.
- A call to action for the readers to try out the game and share their feedback
- If you are a fan of Barbie and her fabulous lifestyle, you should definitely try out Barbie Dreamhouse Adventures Mod APK 2022. This is a fun and creative game that will keep you entertained for hours. You can design your own dream house and live an exciting life with Barbie and her friends. You can also enjoy the VIP feature for free, which gives you access to exclusive content, items, and special features that are not available in the original game. You can also share your creations with other players online and make new friends. So what are you waiting for? Download Barbie Dreamhouse Adventures Mod APK 2022 today and start playing. And don't forget to share your feedback with us in the comments section below. We would love to hear from you.
- FAQs
- Five unique questions and answers related to the topic of the article
- Here are some frequently asked questions (FAQs) related to the topic of the article:
-
-Is Barbie Dreamhouse Adventures Mod APK 2022 safe to download and install? Yes, Barbie Dreamhouse Adventures Mod APK 2022 is safe to download and install on your Android device. However, make sure you download it from a reliable source like the link we have provided in this article. Also, make sure you enable the installation of apps from unknown sources on your device before installing the mod apk file.
-What are the differences between Barbie Dreamhouse Adventures and Barbie Dreamhouse Adventures Mod APK 2022? The main difference between Barbie Dreamhouse Adventures and Barbie Dreamhouse Adventures Mod APK 2022 is that the mod apk version offers an unlocked VIP feature, which gives you access to exclusive content, items, and special features that are not available in the original game. For example, you can get unlimited coins, gems, energy, access to all rooms, characters, outfits, pets, adventures, mini-games, and more in the mod apk version.
-Do I need to root my device to play Barbie Dreamhouse Adventures Mod APK 2022? No, you do not need to root your device to play Barbie Dreamhouse Adventures Mod APK 2022. You can play the game without any root access or permissions on your device.
-Can I play Barbie Dreamhouse Adventures Mod APK 2022 offline? Yes, you can play Barbie Dreamhouse Adventures Mod APK 2022 offline. However, some features and functions may require an internet connection to work properly. For example, you may need an internet connection to share your creations with other players online or to download new updates and content for the game.
-Can I play Barbie Dreamhouse Adventures Mod APK 2022 on other devices besides Android? No, Barbie Dreamhouse Adventures Mod APK 2022 is only compatible with Android devices. You cannot play the game on other devices such as iOS, Windows, Mac, etc.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/CarX Street Mod APK for iOS How to Unlock All Cars and Features.md b/spaces/congsaPfin/Manga-OCR/logs/CarX Street Mod APK for iOS How to Unlock All Cars and Features.md
deleted file mode 100644
index a5538f1fd59e37e7cab802190be0de75f79423c7..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/CarX Street Mod APK for iOS How to Unlock All Cars and Features.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-CarX Street Download Mod APK iOS: How to Play the Ultimate Street Racing Game on Your iPhone
- If you are a fan of realistic car physics, high-speed drifting and open-world exploration, you might want to check out CarX Street, a simulation racing video game that offers all that and more. CarX Street is developed by CarX Technologies, LLC, the same company behind the popular CarX Drift Racing series. The game is currently available on iOS devices in some regions, and it will soon launch on PC and consoles as well.
- In this article, we will tell you everything you need to know about CarX Street download mod apk iOS, including its features, gameplay, tips and tricks, and how to install it on your iPhone. We will also answer some frequently asked questions about the game. Let's get started!
-carx street download mod apk ios Download File ——— https://urlca.com/2uOaLs
- What is CarX Street?
- CarX Street is an open-world street racing game that lets you get behind the wheel and explore a large city and its surroundings, from busy city streets to spiral mountain roads and mesmerizing coastal highways. You can choose from a variety of cars, from classic muscle cars to modern supercars, and customize them with detailed tuning options. You can also compete against other players in real network races or join clubs and challenge bosses.
- What are the features of CarX Street?
- CarX Street has many features that make it an exciting and immersive racing game. Here are some of them:
-
-Realistic car physics : CarX Street uses the CarX Technology engine, which simulates the behavior of cars on the road, giving you a true-to-life racing experience. You can feel the thrill of high-speed racing as you maneuver your car through tight turns and weave in and out of traffic.
-Open-world exploration : CarX Street has a large and diverse map that you can explore freely. You can drive around the city and discover hidden locations, shortcuts and secrets. You can also enjoy the dynamic day/night cycle and weather effects that change the atmosphere of the game.
-Car customization : CarX Street allows you to build the car of your dreams with part tuning that unlocks all the potential of car physics. You can swap parts and trick out your car for a specific race. You can also customize the appearance of your car with various options such as mirrors, headlights, lights, skirts, bumpers, rims and more.
-Multiplayer mode : CarX Street lets you race against other players online in real network races. You can join or create rooms with different settings such as race mode, track, time limit and number of players. You can also chat with other racers and show off your skills.
-Career mode : CarX Street has a career mode where you can join clubs, defeat bosses and prove yourself as the best driver in the city. You can also buy houses for your cars and collect rewards for completing races and events.
-
- How to download CarX Street mod apk iOS?
- If you want to play CarX Street on your iPhone, you will need to download the mod apk file from a reliable source. A mod apk file is a modified version of the original game file that allows you to access features that are not available in the official version. For example, you can get unlimited money, unlock all cars and parts, remove ads and more.
- However, downloading a mod apk file comes with some risks. You may encounter malware or viruses that can harm your device or compromise your personal data. You may also violate the terms of service of the game developer or Apple and face legal consequences. Therefore, we do not recommend downloading or using any mod apk files for CarX Street or any other game.
- If you still want to try it at your own risk, here are the steps you need to follow:
-
-Find a reputable website that offers the CarX Street mod apk file for iOS. Make sure to read the reviews and ratings of other users before downloading anything.
-Download the mod apk file to your computer or laptop.
-Connect your iPhone to your computer or laptop using a USB cable.
-Open iTunes on your computer or laptop and select your iPhone from the device list.
-Go to the Apps section and drag and drop the mod apk file to the app list.
-Sync your iPhone with iTunes and wait for the installation to complete.
-Launch CarX Street on your iPhone and enjoy the game with the mod features.
-
- How to play CarX Street?
- CarX Street is easy to play but hard to master. You will need to learn how to control your car, drift, overtake and avoid obstacles. Here are some tips and tricks to help you improve your skills:
-
-Choose the right car : CarX Street has a wide range of cars, each with different characteristics such as speed, acceleration, handling, braking and drift. You should choose a car that suits your driving style and the track you are racing on. For example, if you are racing on a curvy road, you might want a car with good handling and drift. If you are racing on a straight road, you might want a car with high speed and acceleration.
-Customize your car : CarX Street allows you to tune your car for optimal performance. You can adjust various parameters such as engine power, suspension stiffness, tire pressure, camber angle, brake balance and more. You can also change the color, vinyls, stickers and decals of your car. Experiment with different combinations and find the best setup for your car.
-Use the controls wisely : CarX Street has two control modes: tilt and touch. You can choose the one that you prefer in the settings menu. You can also customize the sensitivity, position and size of the buttons. The basic controls are: gas pedal, brake pedal, handbrake, steering wheel and nitro boost. You should use them carefully and with good timing to control your car. For example, you can use the handbrake to initiate a drift, the nitro boost to gain speed and the brake pedal to slow down or stop.
-Practice drifting : Drifting is an essential skill in CarX Street. It allows you to take sharp turns without losing speed or control. It also gives you bonus points and fills up your nitro meter. To drift, you need to press the handbrake while turning the steering wheel in the direction of the turn. You need to balance the gas pedal and the steering wheel to maintain the drift angle and speed. You can also use the brake pedal or the nitro boost to adjust your drift.
-Explore the map : CarX Street has a large map that you can explore freely. You can find hidden locations, shortcuts and secrets that can give you an advantage in races or events. You can also enjoy the scenery and discover new places. You can use the map icon on the top left corner of the screen to see your location and nearby points of interest.
-
- How to compare CarX Street with other racing games?
- CarX Street is not the only racing game available on iOS devices. There are many other games that offer similar or different features and gameplay. Here is a table that compares CarX Street with some of the most popular racing games on iOS:
-
-| Game | Features | Gameplay |
-| --- | --- | --- |
-| CarX Street | Realistic car physics, open-world exploration, car customization, multiplayer mode, career mode | Simulation racing, drifting, network races, club challenges, boss battles |
-| Asphalt 9: Legends | Stunning graphics, licensed cars, arcade mode, online mode, club system | Arcade racing, stunts, nitro boost, career mode, events mode |
-| Real Racing 3 | Realistic graphics, licensed cars, real tracks, Time Shifted Multiplayer, motorsports mode | Simulation racing, racing line, pit stops, career mode, motorsports mode |
-| Need for Speed: No Limits | Action-packed graphics, customizable cars, underground mode, online mode, Blackridge Rivals mode | Arcade racing, cops and robbers, nitro boost, campaign mode, events mode |
-
- As you can see, each game has its own strengths and weaknesses. You can choose the one that suits your preferences and expectations. However, if you are looking for a realistic and immersive street racing game with open-world exploration and car customization, CarX Street might be the best option for you.
- Conclusion
- CarX Street is a simulation racing video game that offers realistic car physics, open-world exploration, car customization, multiplayer mode and career mode. It is currently available on iOS devices in some regions, and it will soon launch on PC and consoles as well. If you want to play CarX Street on your iPhone, you can download the mod apk file from a reliable source, but be aware of the risks involved. Alternatively, you can wait for the official release of the game and enjoy it without any modifications. CarX Street is a game that will appeal to fans of street racing, drifting and car tuning. If you are one of them, you should definitely give it a try!
- FAQs
- Q: How can I get more money in CarX Street?
-A: You can get more money in CarX Street by completing races and events, joining clubs, defeating bosses, buying houses and collecting rewards. You can also watch ads or make in-app purchases to get more money.
- Q: How can I unlock more cars in CarX Street?
-A: You can unlock more cars in CarX Street by progressing through the career mode, joining clubs, defeating bosses and buying houses. You can also buy cars with real money or use mod apk files to unlock all cars.
- Q: How can I update CarX Street?
-A: You can update CarX Street by going to the App Store and checking for updates. You can also enable automatic updates in the settings menu. If you are using a mod apk file, you may need to download a new version of the file and install it again.
- Q: How can I contact CarX Technologies?
-A: You can contact CarX Technologies by visiting their official website or social media pages. You can also send them an email at support@carx-tech.com or use the feedback form in the game.
- Q: Is CarX Street compatible with my device?
-A: CarX Street is compatible with iOS devices that have iOS 11.0 or later installed. You can check the compatibility of your device by going to the App Store and reading the description of the game.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Aether 2 All Version and Experience the Sequel to the Popular Dimension Mod.md b/spaces/congsaPfin/Manga-OCR/logs/Download Aether 2 All Version and Experience the Sequel to the Popular Dimension Mod.md
deleted file mode 100644
index 61f6e3a927d5b2967a37730f3e456312feb58265..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Aether 2 All Version and Experience the Sequel to the Popular Dimension Mod.md
+++ /dev/null
@@ -1,202 +0,0 @@
-
-How to Download Aether 2 Mod for Minecraft
-If you are looking for a new adventure in Minecraft, you might want to try out the Aether 2 mod. This mod adds a whole new dimension to the game, where you can explore floating islands, fight mythical creatures, and discover ancient secrets. In this article, we will show you how to download and install the Aether 2 mod for Minecraft, as well as how to play it.
- What is Aether 2 Mod?
-The Aether 2 mod is the sequel to the popular Aether mod, which was released in 2011. The Aether 2 mod is a collaborative project by Gilded Games, a team of modders who aim to create a high-quality and immersive dimension mod for Minecraft. The Aether 2 mod is set in a hostile paradise high above the clouds, where you can find new ores, mobs, blocks, and much more. The Aether 2 mod also features three dungeons, each with a unique boss and loot. The Aether 2 mod is currently in beta stage, and the developers are working on adding more content and features in the future.
- Features of Aether 2 Mod
-Some of the features that you can expect from the Aether 2 mod are:
-
-A new dimension with floating islands, clouds, and skylands.
-New biomes, such as the Highlands, the Garden, and the Necromancer Tower.
-New mobs, such as Zephyrs, Cockatrices, Valkyries, and Sun Spirits.
-New blocks, such as Skyroot Planks, Holystone Bricks, and Ambrosium Torches.
-New items, such as Gravitite Armor, Phoenix Bow, and Cloud Parachute.
-New crafting materials, such as Zanite Gems, Icestone, and Continuum Orbs.
-New dungeons, such as the Bronze Dungeon, the Silver Dungeon, and the Gold Dungeon.
-New achievements, such as "Aerwhale Rider", "Slider Slayer", and "Sun God".
-A custom user interface with a new inventory system and a party system.
-A custom soundtrack with original music composed by Emile van Krieken.
-
- Requirements for Aether 2 Mod
-To run the Aether 2 mod, you will need:
-
-Minecraft version 1.7.10 or higher.
-Forge version 10.13.4.1614 or higher.
-A minimum of 4 GB of RAM allocated to Minecraft.
-A stable internet connection (optional but recommended).
-
- How to Install Aether 2 Mod
-Installing the Aether 2 mod is not very difficult, but it does require some steps. Here is how you can do it:
- Download and Install Forge
-Forge is a modding API that allows you to run multiple mods on Minecraft. You will need Forge to run the Aether 2 mod. To download and install Forge, follow these steps:
-
-Go to the official Forge website and download the installer for your Minecraft version.
-Run the installer and select "Install client".
-Select your Minecraft directory and click "OK".
-Wait for the installation to finish and click "OK".
-
-You have now installed Forge on your Minecraft client. You can check if it works by launching Minecraft and selecting the "Forge" profile.
- Download Aether 2 Mod File
-The next step is to download the Aether 2 mod file from the official website. To do this, follow these steps:
-
-Go to the official Aether 2 website and click on "Download".
-Select the version of the mod that matches your Minecraft version and click on "Download".
-Wait for the download to finish and save the file to a convenient location.
-
-You have now downloaded the Aether 2 mod file. It should be a .jar file with a name like "aether-1.7.10-1.6.jar".
- Copy Aether 2 Mod File to Mods Folder
-The final step is to copy the Aether 2 mod file to the mods folder in your Minecraft directory. To do this, follow these steps:
-
-Open your Minecraft directory. You can find it by typing "%appdata%\.minecraft" in the Windows search bar or by following this path: C:\Users\YourName\AppData\Roaming\.minecraft.
-Open the "mods" folder. If you don't have one, create one by right-clicking and selecting "New > Folder".
-Copy and paste the Aether 2 mod file that you downloaded into the mods folder.
-
-You have now installed the Aether 2 mod on your Minecraft client. You can check if it works by launching Minecraft and selecting the "Forge" profile.
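-If you prefer to script the copy step above instead of doing it by hand, here is a minimal Python sketch that does the same thing. It is only an illustration: the file name matches the example above, and the Downloads and .minecraft locations are assumed Windows defaults, so adjust the paths to wherever your browser actually saved the mod file.
-```python
-import shutil
-from pathlib import Path
-
-# Assumed locations: change these to match your own download folder and mod file name.
-downloaded_jar = Path.home() / "Downloads" / "aether-1.7.10-1.6.jar"
-mods_folder = Path.home() / "AppData" / "Roaming" / ".minecraft" / "mods"  # i.e. %appdata%\.minecraft\mods
-
-# Create the mods folder if it does not exist yet, then copy the mod file into it.
-mods_folder.mkdir(parents=True, exist_ok=True)
-shutil.copy2(downloaded_jar, mods_folder / downloaded_jar.name)
-print(f"Copied {downloaded_jar.name} to {mods_folder}")
-```
-After running it, launch Minecraft with the "Forge" profile as described above and the mod should appear in your mods list.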
- How to Play Aether 2 Mod
-Now that you have installed the Aether 2 mod, you are ready to play it. Here are some tips on how to enjoy the mod:
- Create a Glowstone Portal
-To enter the Aether dimension, you will need to create a glowstone portal. This is similar to creating a nether portal, but with glowstone blocks instead of obsidian blocks. To create a glowstone portal, follow these steps:
-
-Gather at least 10 glowstone blocks. You can find them in the nether or craft them from glowstone dust.
-Build a rectangular frame with glowstone blocks that is at least 4 blocks wide and 5 blocks tall.
-Activate the portal by using a bucket of water on one of the inner blocks of the frame.
-
-You have now created a glowstone portal. You can enter it by walking into it.
- Enter the Aether Dimension
-Once you enter the glowstone portal, you will be transported to the Aether dimension. This is a sky-themed dimension with floating islands, clouds, and skylands. You will also see a custom user interface with a new inventory system and a party system. The Aether dimension has different biomes, such as the Highlands, the Garden, and the Necromancer Tower. Each biome has its own terrain, vegetation, mobs, and dungeons. You can explore the Aether dimension by flying on aerwhales, riding on moas, or using cloud parachutes.
- Explore the Floating Islands
-The floating islands are the main feature of the Aether dimension. They are composed of various blocks, such as skyroot planks, holystone bricks, and ambrosium torches. You can mine these blocks with normal tools or with special tools made from zanite gems or gravitite ore. You can also find new ores, such as icestone and continuum orbs, which have unique properties and uses. You can use icestone to freeze water or lava, and continuum orbs to teleport to random locations.
- Fight the Bosses and Dungeons
-The Aether dimension also has three dungeons, each with a unique boss and loot. The dungeons are:
-
-The Bronze Dungeon: This is a labyrinth-like dungeon made of carved stone and mossy holystone. The boss of this dungeon is the Slider, a giant stone cube that slides across the floor and walls. The loot of this dungeon includes bronze keys, which can be used to open treasure chests in other dungeons.
-The Silver Dungeon: This is a tower-like dungeon made of silver blocks and pillars. The boss of this dungeon is the Valkyrie Queen, a powerful warrior that flies and shoots arrows. The loot of this dungeon includes silver keys, which can be used to open locked doors in other dungeons.
-The Gold Dungeon: This is a temple-like dungeon made of gold blocks and statues. The boss of this dungeon is the Sun Spirit, a fiery entity that shoots fireballs and summons minions. The loot of this dungeon includes the Sun Altar, which can be used to create a portal back to the overworld.
-
-To enter a dungeon, you will need to find a dungeon entrance on one of the floating islands. The dungeon entrance is marked by a symbol that matches the color of the dungeon. To access the boss room, you will need to complete a series of puzzles and challenges in the dungeon.
- Craft New Items and Blocks
-The Aether dimension also has new items and blocks that you can craft with the materials you find. Some of the items and blocks are:
-
-
-| Item/Block | Recipe | Use |
-| --- | --- | --- |
-| Skyroot Planks | 4 Skyroot Logs | A basic building material that can be used to make skyroot tools and other items. |
-| Holystone Bricks | 4 Holystone | A decorative building material that can be used to make holystone tools and other items. |
-| Ambrosium Torch | 1 Ambrosium Shard + 1 Skyroot Stick | A light source that can be placed on any surface. |
-| Zanite Gemstone | Mine Zanite Ore with any pickaxe | A crafting material that can be used to make zanite tools and armor, which have increased durability and efficiency as they wear out. |
-| Gravitite Ore | Mine Gravitite Ore with a zanite pickaxe or higher | A crafting material that can be used to make gravitite tools and armor, which can levitate blocks and entities when used. |
-| Phoenix Bow | 3 String + 3 Zanite Gemstones | A weapon that shoots flaming arrows that explode on impact. |
-| Cloud Parachute | 4 String + 2 Clouds (any color) | An item that can be used to glide down from high places; it has unlimited uses and can be dyed with different colors. |
-
- Conclusion
- The Aether 2 mod is a great way to add some variety and challenge to your Minecraft experience. It offers a new dimension with stunning visuals, unique gameplay, and epic loot. You can download and install the Aether 2 mod by following the steps in this article, and enjoy exploring the skylands with your friends or solo. The Aether 2 mod is still in development, so you can expect more updates and features in the future. If you have any questions or feedback about the mod, you can visit the official website or join the Discord server. Have fun in the Aether!
- FAQs
- Here are some frequently asked questions about the Aether 2 mod:
-
- Is the Aether 2 mod compatible with other mods?
- The Aether 2 mod is compatible with most mods that use Forge, as long as they don't conflict with the Aether 2 mod's features or mechanics. However, some mods may cause issues or crashes, so it is recommended to test them before playing. You can also check the compatibility list on the official website for more information.
- How do I update the Aether 2 mod?
- To update the Aether 2 mod, you will need to download the latest version of the mod file from the official website and replace the old one in your mods folder. You will also need to update Forge if necessary. Make sure to backup your world before updating, as some changes may affect your progress or items.
- How do I uninstall the Aether 2 mod?
- To uninstall the Aether 2 mod, you will need to delete the mod file from your mods folder and remove Forge from your Minecraft client. You will also need to delete any files or folders related to the Aether 2 mod in your Minecraft directory, such as "aether" or "aether.dat". Make sure to backup your world before uninstalling, as you may lose your items or progress in the Aether dimension.
- What are the differences between the Aether 2 mod and the original Aether mod?
- The Aether 2 mod is the sequel to the original Aether mod, which was released in 2011. The Aether 2 mod has many improvements and additions over the original Aether mod, such as:
-
-A new dimension with more biomes, mobs, blocks, and items.
-Three dungeons with unique bosses and loot.
-A custom user interface with a new inventory system and a party system.
-A custom soundtrack with original music composed by Emile van Krieken.
-More compatibility and stability with Forge and other mods.
-More updates and features in the future.
-
- How do I join a party in the Aether 2 mod?
- The party system is a feature of the Aether 2 mod that allows you to team up with other players and share your progress and loot in the Aether dimension. To join a party, you will need to do the following:
-
-Press "P" to open the party menu.
-Click on "Create Party" or "Join Party".
-Enter a party name or select an existing party from the list.
-Invite or accept other players to join your party.
-
-You have now joined a party. You can chat with your party members, see their health and location, and share your achievements and keys in the Aether dimension.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download WhatsApp 2021 The Best Way to Stay Connected with Your Friends and Family.md b/spaces/congsaPfin/Manga-OCR/logs/Download WhatsApp 2021 The Best Way to Stay Connected with Your Friends and Family.md
deleted file mode 100644
index 193abed8e20d223b59763e729cbf895dd4a7b26f..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download WhatsApp 2021 The Best Way to Stay Connected with Your Friends and Family.md
+++ /dev/null
@@ -1,147 +0,0 @@
-
-WhatsApp Download 2021 New Version: How to Get the Latest Features and Updates
- If you are looking for a simple, reliable, and private way to communicate with your friends and family, you should consider using WhatsApp. WhatsApp is a free messaging and video calling app that works across mobile and desktop devices, even on slow connections. It has over 2 billion users in more than 180 countries, making it one of the most popular apps in the world.
- In this article, we will show you how to download WhatsApp on your device, how to update it to the latest version, and what new features and updates you can expect from the 2021 version of WhatsApp. Let's get started!
- What is WhatsApp and Why You Should Use It
- WhatsApp is an app that lets you send text messages, voice messages, photos, videos, documents, and other files to anyone who has the app installed on their phone or computer. You can also make voice and video calls with up to 8 people for free, using your phone's Internet service or Wi-Fi. You can also create group chats to stay in touch with your friends and family, share your location, record voice messages, and more.
- WhatsApp Features and Benefits
- Some of the features and benefits of using WhatsApp are:
-
-You can message and call anyone in the world for free, as long as they have WhatsApp installed on their device.
-You can use WhatsApp across different devices, such as your phone, tablet, laptop, or desktop computer.
-You can enjoy high-quality voice and video calls, even on slow or unstable connections.
-You can express yourself with emojis, stickers, GIFs, and custom wallpapers.
-You can share your daily moments through Status, which lets you post text, photos, videos, and GIFs that disappear after 24 hours.
-You can backup your chats and media to Google Drive or iCloud, so you don't lose them if you change your phone or delete the app.
-
- WhatsApp Privacy and Security
- One of the main reasons why people choose WhatsApp is because of its privacy and security features. WhatsApp uses end-to-end encryption, which means that only you and the person you are communicating with can read or listen to your messages and calls. No one else, not even WhatsApp or Meta (the company that owns WhatsApp), can access them.
- WhatsApp also gives you control over your privacy settings, such as who can see your last seen, profile photo, status, and online status. You can also block or report any unwanted contacts or messages. You can also request that your data be deleted from WhatsApp's servers at any time.
- How to Download WhatsApp on Your Device
- If you want to start using WhatsApp, you need to download it on your device first. Here are the steps to download WhatsApp for different devices:
- Download WhatsApp for Android
- If you have an Android phone or tablet, you can download WhatsApp from the Google Play Store. Here's how:
-
-Open the Google Play Store app on your device.
-Search for "WhatsApp" or tap on this link.
-Tap on "Install" and wait for the app to download.
-Open the app and follow the instructions to set up your account.
-
- Download WhatsApp for iOS
- If you have an iPhone or iPad, you can download WhatsApp from the App Store. Here's how:
-
-Open the App Store app on your device.
-Search for "WhatsApp" or tap on this link.
-Tap on "Get" and wait for the app to download.
-Open the app and follow the instructions to set up your account.
-
- Download WhatsApp for Desktop
- If you want to use WhatsApp on your laptop or desktop computer, you can download WhatsApp for Windows or Mac from the official website. Here's how:
-
-Go to this link on your browser.
-Select your operating system (Windows or Mac) and click on "Download".
-Run the downloaded file and follow the instructions to install WhatsApp on your computer.
-Open the app and scan the QR code with your phone to link your account.
-
- You can also use WhatsApp Web, which is a browser-based version of WhatsApp that works on any computer. To use WhatsApp Web, go to this link on your browser and scan the QR code with your phone to link your account.
- How to Update WhatsApp to the Latest Version
- To enjoy the latest features and updates of WhatsApp, you need to update it to the latest version regularly. Here are the steps to update WhatsApp for different devices:
- Update WhatsApp for Android
- If you have an Android device, you can update WhatsApp from the Google Play Store. Here's how:
-
-Open the Google Play Store app on your device.
-Tap on the menu icon (three horizontal lines) and select "My apps & games".
-Find "WhatsApp" in the list of apps and tap on "Update".
-Wait for the app to update and open it to enjoy the new features.
-
- Update WhatsApp for iOS
- If you have an iOS device, you can update WhatsApp from the App Store. Here's how:
-
-Open the App Store app on your device.
-Tap on your profile icon (top right corner) and select "Purchased".
-Find "WhatsApp" in the list of apps and tap on "Update".
-Wait for the app to update and open it to enjoy the new features.
-
- Update WhatsApp for Desktop
- If you have a Windows or Mac computer, you can update WhatsApp from the official website. Here's how:
-
-Go to this link on your browser.
-Select your operating system (Windows or Mac) and click on "Download".
-Run the downloaded file and follow the instructions to install the latest version of WhatsApp on your computer.
-Open the app and scan the QR code with your phone to link your account.
-
- You can also update WhatsApp Web by refreshing your browser page or clearing your cache and cookies.
- What's New in WhatsApp 2021 Version
- The 2021 version of WhatsApp comes with some new features and updates that make it more fun and convenient to use. Here are some of them:
- New Messaging and Calling Options
- You can now send disappearing messages that automatically delete after 7 days, giving you more privacy and control over your chats. You can also send view-once photos and videos that disappear after they are opened, making them perfect for sharing sensitive or personal content. You can also join group calls that are already in progress, so you don't miss out on any conversation. You can also mute video calls if you don't want to be heard by others.
- New Status and Profile Features
- You can now customize your status with different fonts, colors, backgrounds, and stickers, making it more expressive and personal. You can also see who has viewed your status by tapping on the eye icon at the bottom of each status. You can also change your profile picture by tapping on it and selecting a photo from your gallery or camera. You can also add a caption or emoji to your profile picture if you want.
- New Business and Shopping Features
- You can now chat with businesses directly from WhatsApp, making it easier to get information, support, or services from them. You can also browse their catalogs, see their products, prices, and descriptions, and place orders without leaving the app. You can also pay for your purchases using WhatsApp Pay, a secure and convenient way to send and receive money through WhatsApp (using your phone's Internet service or Wi-Fi). You can also get receipts and confirmations for your transactions through WhatsApp.
- Conclusion
- WhatsApp is a great app to stay connected with your friends and family, as well as to communicate with businesses and shop online. It is free, easy to use, and secure. It also offers many features and updates that make it more fun and convenient to use. If you want to download WhatsApp on your device, update it to the latest version, or learn more about its new features, you can follow the steps and links we have provided in this article. We hope you found this article helpful and informative. Happy WhatsApping!
- FAQs
- Here are some frequently asked questions about WhatsApp:
-
-How do I create a WhatsApp account?
-To create a WhatsApp account, you need a phone number that can receive SMS or calls. You also need to download the app on your device and follow the instructions to verify your number and set up your profile.
-How do I backup and restore my WhatsApp chats and media?
-To backup your WhatsApp chats and media, you need to link your account to Google Drive (for Android) or iCloud (for iOS). You can then choose how often you want to backup your data (daily, weekly, monthly, or manually). To restore your WhatsApp chats and media, you need to reinstall the app on your device and follow the instructions to restore your data from Google Drive or iCloud.
-How do I delete my WhatsApp account?
-To delete your WhatsApp account, you need to open the app and go to Settings > Account > Delete my account. You will then need to enter your phone number and tap on "Delete my account". This will erase your account information, profile photo, status, groups, chats, and media from WhatsApp's servers. It will also delete your backup data from Google Drive or iCloud.
-How do I block or unblock someone on WhatsApp?
-To block someone on WhatsApp, you need to open the chat with that person and tap on their name or profile picture. You will then see an option to "Block" them. To unblock someone on WhatsApp, you need to go to Settings > Account > Privacy > Blocked contacts. You will then see a list of blocked contacts and an option to "Unblock" them.
-How do I change my WhatsApp settings?
-To change your WhatsApp settings, you need to go to Settings > Account > Privacy. You will then see various options to change your settings, such as who can see your last seen, profile photo, status, online status, read receipts, live location, etc. You can also change other settings such as notifications, data usage, storage usage, chats wallpaper, etc.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Stream 60 Segundos a Masterpiece by Bruno M.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Stream 60 Segundos a Masterpiece by Bruno M.md
deleted file mode 100644
index cff0e3445d6a0e254f6f9410358cbc1e301c48de..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download and Stream 60 Segundos a Masterpiece by Bruno M.md
+++ /dev/null
@@ -1,154 +0,0 @@
-
-How to Download Bruno M's "60 segundos" from YouTube
- Do you love Angolan music? Do you want to enjoy one of the best songs by one of the most talented and creative artists in Africa? Do you want to listen to it anytime and anywhere, even without an internet connection? If you answered yes to any of these questions, then this article is for you. In this article, we will show you how to download Bruno M's song "60 segundos" from YouTube, so you can enjoy it offline on your computer or mobile device. But first, let's find out more about Bruno M and his music.
- Who is Bruno M and why you should listen to his music
- Bruno M is a singer, songwriter, and producer from Angola. He is one of the pioneers of kuduro, a genre of music and dance that originated in Luanda in the late 1980s and early 1990s. Kuduro combines elements of African percussion, electronic beats, rap, and Portuguese lyrics. It is a fast-paced, energetic, and catchy style that reflects the urban culture and social issues of Angola.
- Bruno M's biography and musical style
- Bruno M was born in 1986 in Luanda, the capital city of Angola. He grew up in a poor neighborhood, where he was exposed to various musical influences, such as semba, zouk, rumba, hip hop, and reggae. He started making music at a young age, using homemade instruments and recording devices. He also participated in street battles and competitions with other kuduro artists.
- Bruno M's musical style is characterized by his unique voice, his witty and provocative lyrics, his innovative use of samples and effects, and his fusion of kuduro with other genres, such as house, funk, soul, jazz, and rock. He is known for his versatility, his originality, and his ability to create catchy hooks and melodies. He is also known for his social commentary, his criticism of corruption and injustice, his celebration of Angolan culture and identity, and his positive messages of hope and empowerment.
- Bruno M's most popular songs and albums
- Bruno M has released several songs and albums that have become hits in Angola and beyond. Some of his most popular songs include:
-
-"Ewe" (2005), a song that denounces the violence and poverty in Angola.
-"1 para 2" (2006), a song that mocks the greediness and hypocrisy of some politicians.
-"Tchubila" (2008), a song that praises the beauty and diversity of Angolan women.
-"Azubayo" (2010), a song that encourages people to dance and have fun.
-"60 segundos" (2014), a song that challenges people to live every moment as if it was their last.
-
- Some of his most popular albums include:
-
-"Batida Única" (2006), his debut album that introduced him to the public.
-"Ekuikui II" (2008), his second album that consolidated his fame and success.
-"Alter Ego" (2010), his third album that showed his artistic evolution and maturity.
-"Live & Love Musik" (2014), his fourth album that reflected his personal and professional growth.
-
- Bruno M's impact on Angolan and African music scene
- Bruno M is not only a successful and influential artist in Angola, but also in Africa and the world. He has won several awards and nominations, such as the Angola Music Awards, the MTV Africa Music Awards, the Kora Awards, and the Channel O Music Video Awards. He has also performed in various countries and festivals, such as the Festa2H in Senegal, the Festival Mundial in the Netherlands, the Africa Express in the UK, and the Rock in Rio in Brazil. He has also collaborated with other artists, such as Buraka Som Sistema, DJ Djeff, C4 Pedro, Anselmo Ralph, and Nelson Freitas.
- Bruno M has contributed to the promotion and recognition of kuduro as a legitimate and valuable musical expression. He has also inspired and supported many young and emerging artists who follow his footsteps. He has been praised by critics and fans alike for his creativity, his authenticity, and his charisma. He is widely regarded as one of the best kuduro artists of all time.
- What is "60 segundos" and why you should download it
- "60 segundos" is one of Bruno M's most recent and popular songs. It was released in 2014 as part of his fourth album "Live & Love Musik". It is a song that combines kuduro with other musical influences, such as pop, rock, and soul. It is a song that has a powerful and catchy chorus, a smooth and melodic verse, and a dynamic and energetic bridge. It is a song that will make you want to dance, sing, and enjoy life.
- The meaning and message of the song
- The title of the song means "60 seconds" in Portuguese. The song is about living every moment as if it was your last. The song is about appreciating what you have, what you do, and who you are. The song is about not wasting time on regrets, worries, or fears. The song is about being happy, grateful, and optimistic. The song is about making every second count.
- The lyrics of the song are simple but profound. They are full of metaphors and analogies that illustrate the concept of time and life. For example, Bruno M compares life to a roller coaster, a game, a movie, a book, a song, a dream, and a journey. He also uses rhetorical questions to engage the listener and make them reflect on their own choices and actions. For example, he asks: "What would you do if you only had 60 seconds to live?"
- The production and reception of the song
- The song was produced by Bruno M himself, with the help of some of his friends and colleagues. The song features Bruno M's vocals, as well as some backing vocals by other singers. The song also features some instruments, such as drums, keyboards, guitars, basses, and synthesizers. The song has a high-quality sound that matches Bruno M's standards and expectations.
- The song was well received by both critics and fans. The song received positive reviews from various media outlets, such as Platina Line, Sapo Angola, Bantumen, and Jet7 Angola. The song also received a lot of airplay on radio stations and TV channels in Angola and other countries. The song also received a lot of views and likes on YouTube and other social media platforms. The song also received a lot of feedback and comments from the listeners, who expressed their admiration and appreciation for Bruno M and his music.
- The benefits of listening to the song offline
- Listening to "60 segundos" online is great, but listening to it offline is even better. Why? Because listening to it offline has many benefits, such as:
-
-You can listen to it anytime and anywhere, even without an internet connection or a Wi-Fi signal.
-You can save your data and battery, especially if you have a limited or expensive plan or a low or old device.
-You can avoid ads and interruptions, especially if you use a free or basic service or app.
-You can enjoy a better sound quality, especially if you download it in a high or optimal format or resolution.
-You can have more control and flexibility, especially if you want to play, pause, skip, repeat, shuffle, or edit the song.
-
- So, how can you download "60 segundos" from YouTube and listen to it offline? There are several ways to do it, but we will show you three of the most common and easy ones. Let's see them.
- How to download "60 segundos" from YouTube legally and safely
- Before we start, we want to remind you that downloading music from YouTube is not always legal or safe. YouTube has its own terms of service and policies that prohibit downloading content without permission or authorization from the owners or creators. Downloading music from YouTube may also expose you to viruses, malware, spyware, or other threats that may harm your device or data. Therefore, we advise you to be careful and responsible when downloading music from YouTube. We also advise you to respect the rights and interests of Bruno M and other artists who work hard to create and share their music with us.
- That being said, here are three ways to download "60 segundos" from YouTube legally and safely:
- The official way: YouTube Music Premium or YouTube Premium
- The official way to download music from YouTube is to use YouTube Music Premium or YouTube Premium. These are subscription-based services that allow you to download music and videos from YouTube and watch or listen to them offline. They also offer other features, such as ad-free playback, background play, unlimited skips, personalized recommendations, and access to exclusive content.
- To use these services, you need to pay a monthly fee that varies depending on your location and preferences. You can also get a free trial for a limited period of time. To download "60 segundos" from YouTube using these services, you need to follow these steps:
-
-Download the YouTube Music app or the YouTube app on your device.
-Sign in with your Google account or create one if you don't have one.
-Subscribe to YouTube Music Premium or YouTube Premium or start your free trial.
-Search for "60 segundos" by Bruno M on the app.
-Select the song and tap on the download icon next to it.
-Choose the quality and format of the download.
-Wait for the download to finish and enjoy the song offline.
-
- The alternative way: 4K Video Downloader or MediaHuman
- The alternative way to download music from YouTube is to use a software program that allows you to download music and videos from YouTube and other websites. There are many programs that offer this service, but we will recommend two of them: 4K Video Downloader and MediaHuman. These are free and easy-to-use programs that allow you to download music and videos from YouTube in high quality and various formats. They also offer other features, such as batch downloads, playlists downloads, subtitles downloads, smart mode, etc.
- To use these programs, you need to download them on your computer and install them. To download "60 segundos" from YouTube using these programs, you need to follow these steps:
-
-Open your browser and go to YouTube.com.
-Search for "60 segundos" by Bruno M on YouTube.
-Select the song and copy its URL from the address bar.
-Open 4K Video Downloader or MediaHuman on your computer.
-Paste the URL into the program and click on the download button.
-Choose the quality and format of the download.
-Wait for the download to finish and enjoy the song offline.
-
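-If you are comfortable with a little code, there is also a scriptable route that the programs above do not cover: the open-source yt-dlp Python library can fetch the audio track of a video for offline listening. The sketch below is only an illustration, not one of the programs recommended above; the URL is a placeholder you would replace with the link you copied, and the same legal and safety caveats discussed earlier still apply.
-```python
-# Requires: pip install yt-dlp
-from yt_dlp import YoutubeDL
-
-options = {
-    "format": "bestaudio/best",      # pick the best available audio-only stream
-    "outtmpl": "%(title)s.%(ext)s",  # save the file under the video title
-}
-
-# Placeholder URL: replace it with the address you copied from YouTube.
-url = "https://www.youtube.com/watch?v=VIDEO_ID"
-
-with YoutubeDL(options) as ydl:
-    ydl.download([url])
-```
-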
- The online way: Online Converters
- The online way to download music from YouTube is to use an online converter that allows you to convert and download music and videos from YouTube and other websites. There are many online converters that offer this service, but we will recommend two of them: Y2mate and OnlineVideoConverter. These are free and easy-to-use online converters that allow you to convert and download music and videos from YouTube in high quality and various formats. They also offer other features, such as editing, cropping, cutting, merging, etc.
- To use these online converters, you don't need to download or install anything on your device. You just need to have a browser and an internet connection. To download "60 segundos" from YouTube using these online converters, you need to follow these steps:
-
-Open your browser and go to YouTube.com.
-Search for "60 segundos" by Bruno M on YouTube.
-Select the song and copy its URL from the address bar.
-Open a new tab and go to Y2mate.com or OnlineVideoConverter.com.
-Paste the URL into the online converter and click on the start or convert button.
-Choose the quality and format of the download.
-Click on the download or save button and choose a location to save the file.
-Wait for the download to finish and enjoy the song offline.
-
- Conclusion and FAQs
- In conclusion, downloading Bruno M's song "60 segundos" from YouTube is possible and easy. You just need to choose one of the three ways we showed you: the official way, the alternative way, or the online way. Each way has its own advantages and disadvantages, so you need to consider your preferences and needs before choosing one. However, whichever way you choose, you will be able to enjoy one of the best songs by one of the best artists in Africa offline.
- We hope this article was helpful and informative for you. If you have any questions or doubts, please check the FAQs below. If you still have any questions or doubts, please leave a comment or contact us. We will be happy to assist you.
- FAQs
- Here are some of the most frequently asked questions about downloading Bruno M's song "60 segundos" from YouTube:
-
-Is downloading music from YouTube legal?
-Downloading music from YouTube is not always legal. It depends on the terms of service and policies of YouTube, as well as the laws and regulations of your country. Generally speaking, downloading music from YouTube without permission or authorization from the owners or creators is illegal and may result in legal consequences. Therefore, we advise you to respect the rights and interests of Bruno M and other artists who work hard to create and share their music with us.
-Is downloading music from YouTube safe?
-Downloading music from YouTube is not always safe. It depends on the source and method of downloading, as well as the security and protection of your device and data. Generally speaking, downloading music from YouTube may expose you to viruses, malware, spyware, or other threats that may harm your device or data. Therefore, we advise you to be careful and responsible when downloading music from YouTube. We also advise you to use reliable antivirus software and a VPN service to protect your device and data.
-What is the best quality and format for downloading music from YouTube?
-The best quality and format for downloading music from YouTube depends on your preferences and needs. Generally speaking, the higher the quality and format, the better the sound quality, but also the larger the file size. Therefore, you need to balance between quality and size according to your device's storage capacity and playback capability. However, whichever quality and format you choose, make sure it is compatible with your device's media player.
-How can I support Bruno M and his music?
-The best way to support Bruno M and his music is to buy his songs and albums from his official website or other authorized platforms, such as iTunes, Spotify, Amazon, etc. You can also support him by following him on his social media accounts, such as Facebook, Instagram, Twitter, etc. You can also support him by sharing his music with your friends and family, by giving him positive feedback and comments, by attending his concerts and events, and by joining his fan club and community.
- Where can I find more information about Bruno M and his music?
-If you want to find more information about Bruno M and his music, you can visit his official website at www.brunom.com. There you can find his biography, discography, news, videos, photos, merchandise, and contact details. You can also visit his Wikipedia page at , where you can find more facts and details about his life and career. You can also search for his name on Google or YouTube, where you can find more articles and videos about him and his music.
-
- Reference: https://en.wikipedia.org/wiki/Bruno_M_(singer)
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Survive Your Neighbors Traps in Hello Neighbor Search and Rescue APK.md b/spaces/congsaPfin/Manga-OCR/logs/How to Survive Your Neighbors Traps in Hello Neighbor Search and Rescue APK.md
deleted file mode 100644
index d4b4d193da7cc8d4a2e243f83a9d8b3adf7cb02d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Survive Your Neighbors Traps in Hello Neighbor Search and Rescue APK.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-Hello Neighbor Search and Rescue APK: A VR Horror-Puzzle Game
-If you are a fan of stealth horror games, you might have heard of Hello Neighbor, a popular game where you sneak into your creepy neighbor's house to discover his secrets. But did you know that there is a new VR spin-off that takes the horror to a whole new level? It's called Hello Neighbor Search and Rescue APK, and it's a groundbreaking VR game that will test your nerves, skills, and wits. In this article, we will tell you everything you need to know about this game, including how to download it, what features it has, how to play it, and what are its pros and cons. Let's get started!
-hello neighbor search and rescue apk Download File — https://urlca.com/2uOdqW
- What is Hello Neighbor Search and Rescue APK?
-Hello Neighbor Search and Rescue APK is a VR horror-puzzle game developed by tinyBuild Games in collaboration with Hologryph Studios. It is based on the Hello Neighbor universe, but it has a different plot, characters, and gameplay. The game was released in October 2023 for Android devices that support VR headsets.
-The game follows the story of four friends who decide to sneak into their neighbor's house to save their friend who went missing. However, they soon realize that their neighbor is not an ordinary person, but a twisted psychopath who has set up traps, puzzles, and cameras all over his house. The game challenges you to use your logic, creativity, and courage to solve the mysteries, avoid the dangers, and escape from the neighbor.
-Hello Neighbor Search and Rescue APK is different from the original Hello Neighbor game in several ways. First of all, it is a VR game that immerses you in a realistic 3D environment where you can interact with objects, move around freely, and feel like you are actually there. Second, it has multiple playable characters that have their own key items and skills that you can switch between to outsmart the neighbor. Third, it has more variety in terms of modes, levels, and challenges that will keep you entertained for hours.
- How to download and install Hello Neighbor Search and Rescue APK?
-If you want to try out Hello Neighbor Search and Rescue APK, you need to download and install the APK file on your Android device. Here are the steps you need to follow (an optional, scripted way to sideload from a computer is sketched right after this list):
-
-Go to a reliable website that offers the APK file of Hello Neighbor Search and Rescue APK, such as [APKPure] or [APKMirror].
-Click on the download button and wait for the file to be downloaded on your device.
-Once the download is complete, locate the file in your device's file manager and tap on it to install it.
-If you see a warning message that says "Install blocked", go to your device's settings and enable the option to install apps from unknown sources.
-Follow the on-screen instructions and grant the necessary permissions to the app.
-Launch the app and enjoy playing Hello Neighbor Search and Rescue APK!
-
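-If you already have the APK on a computer, the same sideload can be scripted over USB instead of tapping through the file manager. The snippet below is only an illustrative sketch, not an official part of the game or of this guide: it assumes Python and the Android platform tools (adb) are installed, USB debugging is enabled on the phone, and the file name hello_neighbor_search_and_rescue.apk is a stand-in for whatever you actually downloaded.
-```python
-import subprocess
-
-# Hypothetical file name; replace it with the APK you downloaded in step 2.
-APK_PATH = "hello_neighbor_search_and_rescue.apk"
-
-def sideload(apk_path: str) -> None:
-    """Install an APK on the USB-connected device via adb (assumes adb is on PATH)."""
-    # -r replaces an existing install, if any, while keeping the app's data.
-    subprocess.run(["adb", "install", "-r", apk_path], check=True)
-
-if __name__ == "__main__":
-    sideload(APK_PATH)
-```
-Installing by hand through the file manager, as in the steps above, works just as well; the script is only a convenience if you sideload builds often.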
- Before you download and install Hello Neighbor Search and Rescue APK, make sure that your device meets the following system requirements:
-
-Android version: 7.0 or higher
-RAM: 4 GB or more
-Storage space: 2 GB or more
-VR headset: Google Cardboard, Samsung Gear VR, Oculus Quest, or any compatible VR device
-
- Please note that downloading APK files from unknown sources can be risky, as they may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Therefore, we recommend that you only download APK files from trusted and verified websites, and scan them with an antivirus software before installing them.
- What are the features of Hello Neighbor Search and Rescue APK?
-Hello Neighbor Search and Rescue APK is a game that offers a lot of features that make it a unique and thrilling VR experience. Some of these features are:
-
-A realistic and immersive 3D environment that lets you explore the neighbor's house in VR.
-A dynamic AI system that adapts to your actions and changes the neighbor's behavior accordingly.
-A variety of puzzles, traps, and secrets that you need to solve, avoid, and discover.
-A multiple character system that allows you to switch between four different characters with their own key items and skills.
-A diverse range of modes, levels, and challenges that offer different scenarios, objectives, and difficulties.
-A stunning graphics quality that enhances the visual appeal of the game.
-A creepy sound design that creates a tense and suspenseful atmosphere.
-A catchy soundtrack that matches the mood and tone of the game.
-
- What are the tips and tricks for playing Hello Neighbor Search and Rescue APK?
-Hello Neighbor Search and Rescue APK is a game that requires a lot of skill, strategy, and courage to play. Here are some tips and tricks that can help you master the game:
-
-Use your VR headset and controller to look around, move, interact, and switch characters.
-Pay attention to the clues, hints, and noises that can guide you to the right path or warn you of the dangers.
-Use your key items wisely and creatively to unlock doors, distract the neighbor, or access hidden areas.
-Learn from your mistakes and try different approaches if you fail or get caught by the neighbor.
-Be stealthy and avoid making noise or leaving traces that can alert the neighbor of your presence.
-Be quick and decisive when facing the neighbor or escaping from him.
-Use your skills wisely and strategically to overcome obstacles, solve puzzles, or fight back against the neighbor.
-Have fun and enjoy the thrill of being in a VR horror-puzzle game!
-
- What are the pros and cons of Hello Neighbor Search and Rescue APK?
-Hello Neighbor Search and Rescue APK is a game that has many pros and cons that can affect your enjoyment of playing it. Here are some of them:
-
-| Pros | Cons |
-| --- | --- |
-| A unique and innovative VR horror-puzzle game that offers a lot of fun and excitement. | A challenging and difficult game that can be frustrating or scary for some players. |
-| A realistic and immersive 3D environment that makes you feel like you are actually in the game. | High system requirements that may not be compatible with some devices or VR headsets. |
-| A dynamic AI system that makes the game unpredictable and replayable. | A potential risk of downloading APK files from unknown sources that may contain viruses, malware, or spyware. |
-| A variety of features that make the game diverse, interesting, and enjoyable. | The lack of a tutorial or a help menu, which can make the game confusing or hard to understand. |
-| A stunning graphics quality that enhances the visual appeal of the game. | A possibility of experiencing motion sickness or discomfort due to the VR experience. |
-| A creepy sound design that creates a tense and suspenseful atmosphere. | The need for a stable internet connection and enough storage space to download and play the game. |
-
- Conclusion
-Hello Neighbor Search and Rescue APK is a VR horror-puzzle game that offers a unique and thrilling experience for fans of the genre. It is a game that will challenge your nerves, skills, and wits as you sneak into your neighbor's house to save your friend. It is a game that will immerse you in a realistic and immersive 3D environment where you can interact with objects, move around freely, and feel like you are actually there. It is a game that will surprise you with its dynamic AI system, its variety of features, and its stunning graphics quality. It is a game that will entertain you for hours with its different modes, levels, and challenges.
-If you are looking for a VR game that will make you scream, laugh, and think, Hello Neighbor Search and Rescue APK is the game for you. You can download it from the links provided below and enjoy playing it on your Android device with your VR headset. But be careful, because your neighbor is watching you...
-Do you have any questions or feedback about Hello Neighbor Search and Rescue APK? Feel free to leave a comment below or contact us through our official website or social media pages. We would love to hear from you!
-Website: [Hello Neighbor]
-Facebook: [Hello Neighbor Game]
-Twitter: [@tinyBuild]
- FAQs
-What is the difference between Hello Neighbor Search and Rescue APK and Hello Neighbor Hide and Seek APK?
-Hello Neighbor Search and Rescue APK and Hello Neighbor Hide and Seek APK are both VR spin-offs of the original Hello Neighbor game, but they have different plots, characters, and gameplay. Hello Neighbor Search and Rescue APK is about four friends who sneak into their neighbor's house to save their missing friend, while Hello Neighbor Hide and Seek APK is about two siblings who play hide and seek in their house while their father is away.
- How long does it take to finish Hello Neighbor Search and Rescue APK?
-The length of Hello Neighbor Search and Rescue APK depends on your skill level, your chosen mode, and your progress in the game. However, on average, it takes about 4 to 6 hours to complete the main story mode of the game.
- Can I play Hello Neighbor Search and Rescue APK without a VR headset?
-No, you cannot play Hello Neighbor Search and Rescue APK without a VR headset. The game is designed to be played in VR mode only, as it uses the VR technology to create a realistic and immersive 3D environment. You need a compatible VR headset and controller to play the game.
- Is Hello Neighbor Search and Rescue APK suitable for children?
-No, Hello Neighbor Search and Rescue APK is not suitable for children. The game is rated 12+ by the Google Play Store, as it contains violence, blood, horror, and scary themes that may not be appropriate for younger audiences. The game may also cause fear, anxiety, or distress for some players due to its intense VR experience.
- Where can I find more information about Hello Neighbor Search and Rescue APK?
-You can find more information about Hello Neighbor Search and Rescue APK on its official website or social media pages. You can also check out the reviews, ratings, videos, screenshots, and articles about the game on various online platforms such as YouTube, Reddit, Steam, or Google Play Store.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/JNTUH Hall Ticket 2023 - Download for UG PG B.Tech B.Pharmacy Courses.md b/spaces/congsaPfin/Manga-OCR/logs/JNTUH Hall Ticket 2023 - Download for UG PG B.Tech B.Pharmacy Courses.md
deleted file mode 100644
index 926c6a8d67ab0a8dfb23c463d83fe431400ff5a6..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/JNTUH Hall Ticket 2023 - Download for UG PG B.Tech B.Pharmacy Courses.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-JNTUH Hall Ticket Download 2023: A Complete Guide for Students
-Jawaharlal Nehru Technological University Hyderabad (JNTUH) is one of the leading universities in India that offers various undergraduate, postgraduate and doctoral programs in engineering, technology, pharmacy, management and other fields. Every year, thousands of students appear for different exams conducted by JNTUH to pursue their academic goals.
-jntuh hall ticket download 2023 Download Zip »»» https://urlca.com/2uO4yd
-One of the most important documents that students need to carry for these exams is the hall ticket or admit card. The hall ticket is a proof of identity and eligibility that contains essential information such as the name, roll number, exam date, time, venue, instructions and more. Without the hall ticket, students will not be allowed to enter the exam hall or write the exam.
-In this article, we will provide you with all the information you need to know about JNTUH hall ticket download 2023 for various courses and exams. We will also answer some frequently asked questions and give you some tips to prepare well for your exams.
- How to download JNTUH hall tickets 2023 for various courses and exams?
-JNTUH releases the hall tickets for different courses and exams on its official website - jntuh.ac.in . Students can download their hall tickets by following these steps:
-
-Visit the official website of JNTUH - jntuh.ac.in .
-On the homepage, click on the "Examinations" tab and select your course and exam from the drop-down menu.
-You will be redirected to a new page where you will find the link for downloading the hall ticket.
-Click on the link and enter your registration number, date of birth and other details as required.
-Verify your details and click on the "Submit" button.
-Your hall ticket will be displayed on the screen. You can download it as a PDF file or take a printout of it.
-
-Note: Some courses and exams may require students to collect their hall tickets from their respective colleges or exam centers. In such cases, students should contact their college authorities or exam coordinators for more details.
- What are the details and instructions present on JNTUH hall tickets 2023?
-The JNTUH hall tickets 2023 contain important details and instructions that students should check carefully before appearing for their exams. Some of these details and instructions are:
-
-Name of the student
-Registration number
-Course name and code
-Semester/year
-Exam name and date
-Exam time and duration
-Exam center name and address
-Photograph and signature of the student
-General instructions for the exam
-COVID-19 guidelines for the exam
-
-Students should ensure that all the details on their hall tickets are correct and match with their identity proofs. If there is any discrepancy or error in the hall ticket, they should report it to their college authorities or exam coordinators as soon as possible.
- What are the important dates and events related to JNTUH exams 2023?
-JNTUH conducts various exams for different courses throughout the year. The dates and events related to these exams may vary depending on the course, semester, regulation and mode of exam. However, some of the common dates and events that students should keep in mind are:
-
-| Date/Event | Description |
-| --- | --- |
-| Notification release date | This is the date when JNTUH releases the official notification for a particular exam on its website. The notification contains the details of the exam such as the eligibility criteria, exam fee, exam schedule, syllabus, etc. |
-| Hall ticket release date | This is the date when JNTUH releases the hall tickets for a particular exam on its website or through the colleges. Students can download their hall tickets by following the steps mentioned above. |
-| Exam date | This is the date when the exam is conducted at various centers across the state. Students should reach the exam center at least 30 minutes before the exam time and carry their hall tickets and identity proofs. |
-| Result declaration date | This is the date when JNTUH declares the results of a particular exam on its website. Students can check their results by entering their hall ticket number and other details. |
-| Revaluation/Recounting date | This is the date when JNTUH allows the students to apply for revaluation or recounting of their answer scripts if they are not satisfied with their results. Students have to pay a fee and submit an online application for this process. |
-
- Conclusion: Summary of the main points and tips for students
-In this article, we have covered everything you need to know about JNTUH hall ticket download 2023 for various courses and exams. We have explained what is JNTUH, why do students need to download hall tickets, how to download hall tickets, what are the details and instructions present on hall tickets, and what are the important dates and events related to JNTUH exams 2023.
-We hope that this article has helped you to understand the process of JNTUH hall ticket download 2023 and prepare well for your exams. Here are some tips for students to ace their exams:
-
-Study the syllabus and previous question papers thoroughly and practice solving them within the given time limit.
-Revise the important concepts and formulas regularly and make notes of them.
-Avoid any distractions and stress before and during the exam and focus on your performance.
-Follow the instructions and guidelines given on the hall ticket and in the exam hall carefully and avoid any malpractice or misconduct.
-Check your answers before submitting your answer sheet and ensure that you have marked them correctly.
-
-We wish you all the best for your exams and future endeavors!
- FAQs: Some common questions and answers about JNTUH hall tickets 2023
-Here are some of the frequently asked questions and answers about JNTUH hall tickets 2023 that you may find useful:
- Q1. What if I lose or damage my hall ticket?
-A1. If you lose or damage your hall ticket, you should immediately contact your college authorities or exam coordinators and request for a duplicate hall ticket. You may have to pay a fee and submit a proof of identity for this process.
- Q2. What if I forget to carry my hall ticket or identity proof to the exam center?
-A2. If you forget to carry your hall ticket or identity proof to the exam center, you will not be allowed to enter the exam hall or write the exam. Therefore, it is advisable to keep your hall ticket and identity proof ready in advance and check them before leaving for the exam.
- Q3. What if there is a change in the exam date, time or venue due to any unforeseen circumstances?
-A3. If there is a change in the exam date, time or venue due to any unforeseen circumstances, JNTUH will notify the students through its website or through their colleges. Students should keep checking the official website of JNTUH regularly for any updates or announcements regarding their exams.
- Q4. How can I check my result after writing the exam?
-A4. You can check your result after writing the exam by visiting the official website of JNTUH - jntuh.ac.in . On the homepage, click on the "Results" tab and select your course and exam from the drop-down menu. You will be redirected to a new page where you can enter your hall ticket number and other details to view your result.
- Q5. How can I apply for revaluation or recounting of my answer script?
-A5. You can apply for revaluation or recounting of your answer script by visiting the official website of JNTUH - jntuh.ac.in . On the homepage , click on the "Examinations" tab and select "Revaluation/Recounting" from the drop-down menu. You will be redirected to a new page where you can find the link for applying for revaluation or recounting of your answer script. You will have to pay a fee and submit an online application for this process.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Super Soldier Z Mod APK How to Join a Family and Participate in Multiplayer PVP Games.md b/spaces/congsaPfin/Manga-OCR/logs/Super Soldier Z Mod APK How to Join a Family and Participate in Multiplayer PVP Games.md
deleted file mode 100644
index 1185ab0e73fea76e1e364adc95fc70df854379d8..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Super Soldier Z Mod APK How to Join a Family and Participate in Multiplayer PVP Games.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-Super Soldier Z Mod APK: A Dragon Ball RPG for Android
-If you are a fan of the Dragon Ball series, you might be interested in playing Super Soldier Z, a mobile game that lets you experience the adventures of Goku and his friends. But what if you want to enjoy the game with unlimited resources and features? That's where Super Soldier Z Mod APK comes in. In this article, we will tell you everything you need to know about this modded version of the game, including how to download and install it, what are its features, and how to play it.
- What is Super Soldier Z?
-A mobile game based on the Dragon Ball franchise
-Super Soldier Z is a mobile game developed by 17funs, a Chinese studio that specializes in creating anime-based games. The game is inspired by the Dragon Ball franchise, one of the most popular and influential manga and anime series in the world. The game features many familiar characters from the series, such as Goku, Vegeta, Piccolo, Gohan, Krillin, Frieza, Cell, Majin Buu, and more. You can train and fight with these warriors in various modes and scenarios, such as story mode, arena mode, family mode, etc.
-super soldier z mod apk Download Zip > https://urlca.com/2uOft9
- Features of Super Soldier Z
-Offline hang up, revenue explodes
-One of the unique features of Super Soldier Z is that you can accumulate resources even while offline. This means that even when you are not playing the game, you still earn rich resources such as coins, gems, and equipment. This way, you can quickly upgrade your levels, get free equipment, and become the number one in the entire server.
- New gameplay, big world exploration
-Super Soldier Z also offers a variety of gameplay options for different players. You can play solo or team up with other players in different modes. You can also join a family and participate in family activities, such as pve and multiplayer pvp games. The game also has a large world map that you can explore and find hidden secrets and treasures.
- Powerful characters, one-click switching
-The game also allows you to switch between different characters in battle. You can choose from a roster of powerful fighters from the Dragon Ball series, each with their own skills and abilities. You can also use their special moves and ultimate skills to defeat your enemies. For example, you can use Goku's Kamehameha, Vegeta's Final Flash, Piccolo's Special Beam Cannon, etc.
- Awakening of fetters, bursting out of sacrifice
-You can also customize your characters and equipment in Super Soldier Z. You can train your heroes and equip them with various items that enhance their stats and attributes. You can also awaken their fetters, which are special bonds that increase their power when they fight together. For example, Goku and Vegeta have a fateful rivalry that boosts their combat effectiveness when they are on the same team.
- The strong enemy comes, the ultimate fierce battle
-Finally, Super Soldier Z also challenges you with strong enemies that test your skills and strategy. You can face off against iconic villains from the Dragon Ball series, such as Frieza, Cell, Majin Buu, etc. You can also enter the super universe time and space mode, where you can trigger your ultimate skill to defeat your enemy with one move. This mode requires a high level of concentration and timing.
- How to download and install Super Soldier Z Mod APK?
-Requirements and compatibility
-To download and install Super Soldier Z Mod APK, you need to have an Android device that meets the following requirements:
-
-Android version: 4.4 or higher
-RAM: 2 GB or more
-Storage: 500 MB or more
-Internet connection: required
-
-The game is compatible with most Android devices, but some models may not work properly. If you encounter any problems, please contact the developer for support.
- Steps to download and install
-Follow these steps to download and install Super Soldier Z Mod APK on your device:
-
-Click on the download button below to get the mod apk file.
-Allow unknown sources on your device by going to Settings > Security > Unknown Sources.
-Locate the downloaded file in your file manager and tap on it to install it.
-Wait for the installation to complete and launch the game.
-Enjoy the game with unlimited resources and features.
-
- Benefits of using the mod apk
-By using Super Soldier Z Mod APK, you can enjoy the following benefits:
-
-Unlimited coins and gems: You can use these currencies to buy items, upgrade your characters, unlock new modes, etc.
-All characters unlocked: You can access all the characters in the game without spending any money or time.
-No ads: You can play the game without any interruptions or distractions from annoying ads.
-No root required: You don't need to root your device to use the mod apk, which means you don't risk damaging your device or losing your warranty.
-
- Super Soldier Z Gameplay and Review
-Graphics and sound
-Super Soldier Z has impressive graphics and sound that capture the essence of the Dragon Ball series. The game has colorful and detailed 3D graphics that show the characters, environments, and effects in a realistic way. The game also has dynamic and immersive sound effects that match the actions and events in the game. The game also has original voice acting from the anime cast, which adds more authenticity and emotion to the game.
- Controls and interface
-The game has simple and intuitive controls and interface that make it easy to play. The game has a virtual joystick on the left side of the screen that lets you move your character, and buttons on the right side that let you attack, defend, switch characters, use skills, etc. The game also has a clear and user-friendly interface that shows your health, energy, coins, gems, etc. The game also has a tutorial mode that guides you through the basics of the game.
- Modes and challenges
-The game has various modes and challenges that keep you entertained and challenged. The game has a story mode that follows the plot of the Dragon Ball series, where you can relive the epic battles and events from the anime. The game also has an arena mode, where you can compete with other players online in real-time battles. The game also has a family mode, where you can join a family and cooperate with other members in different activities. The game also has a super universe time and space mode, where you can face off against powerful enemies in a one-on-one showdown.
- Pros and cons
-The game has many pros and cons that you should consider before playing it. Here are some of them:
-
-| Pros | Cons |
-| --- | --- |
-| Fun and addictive gameplay | Requires internet connection |
-| Faithful to the Dragon Ball series | May have bugs or glitches |
-| Many characters and modes to choose from | May drain battery quickly |
-| High-quality graphics and sound | May not work on some devices |
-| Free to play with mod apk | May be banned by the developer |
-
- Conclusion and FAQs
-In conclusion, Super Soldier Z is a great game for Dragon Ball fans who want to enjoy a mobile RPG based on their favorite anime series. The game has many features and benefits that make it worth playing with the mod apk. However, the game also has some drawbacks and risks that you should be aware of before downloading and installing it. If you have any questions or doubts about the game, you can check out the FAQs below or contact the developer for more information.
- FAQs
-Here are some of the frequently asked questions about Super Soldier Z and its mod apk:
-
-Q: Is Super Soldier Z Mod APK safe to use?
-A: Super Soldier Z Mod APK is safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it. However, you should also be careful not to share your personal or financial information with anyone online, as some mod apk files may contain malware or spyware that can harm your device or data.
-Q: Is Super Soldier Z Mod APK legal to use?
-A: Super Soldier Z Mod APK is not legal to use, as it violates the terms and conditions of the original game and its developer. By using the mod apk, you are also infringing the intellectual property rights of the Dragon Ball franchise and its creators. Therefore, you may face legal consequences or penalties if you are caught using the mod apk by the authorities or the developer.
-Q: How can I update Super Soldier Z Mod APK?
-A: Super Soldier Z Mod APK may not be compatible with the latest version of the original game, as the developer may update the game frequently to fix bugs or add new features. Therefore, you may need to download and install a new version of the mod apk whenever there is an update available. You can check for updates on the website where you downloaded the mod apk or on our website.
-Q: How can I uninstall Super Soldier Z Mod APK?
-A: You can uninstall Super Soldier Z Mod APK by following these steps:
-Go to Settings > Apps > Super Soldier Z.
-Tap on Uninstall and confirm your choice.
-Delete the mod apk file from your device.
-Restart your device.
-Q: How can I contact the developer of Super Soldier Z?
-A: You can contact the developer of Super Soldier Z by using their official email address: 17funs@163.com. You can also follow them on their social media platforms, such as Facebook, Twitter, Instagram, etc.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The L Word Generation Q Season 1 - Where to Download and Watch Online.md b/spaces/congsaPfin/Manga-OCR/logs/The L Word Generation Q Season 1 - Where to Download and Watch Online.md
deleted file mode 100644
index 03b192dbba44daacbf1237f21a243f0f933d247d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The L Word Generation Q Season 1 - Where to Download and Watch Online.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-Download The L Word: Generation Q Season 1
-If you are a fan of the groundbreaking series The L Word, which ran from 2004 to 2009, you might be interested in its sequel, The L Word: Generation Q. This show follows the lives of some of the original characters, as well as new ones, as they navigate love, friendship, career, and identity in Los Angeles. In this article, we will tell you what the show is about, why you should watch it, and how you can download the first season.
- What is The L Word: Generation Q?
-The L Word: Generation Q is a drama series that premiered in 2019 on Showtime. It is a continuation of the popular show The L Word, which focused on a group of lesbian and bisexual women in West Hollywood. The new series features some of the same characters, such as Bette Porter (Jennifer Beals), Alice Pieszecki (Leisha Hailey), and Shane McCutcheon (Katherine Moennig), as well as new ones, such as Dani Nùñez (Arienne Mandi), Sophie Suarez (Rosanny Zayas), Micah Lee (Leo Sheng), Finley (Jacqueline Toboni), and Gigi (Sepideh Moafi). The show explores their personal and professional lives, as well as their relationships with each other and their community.
-download the l word generation q season 1 Download File 🌟 https://urlca.com/2uO75m
- The plot of the show
-The first season of The L Word: Generation Q consists of eight episodes that aired from December 2019 to January 2020. The season follows Bette as she runs for mayor of Los Angeles, Alice as she hosts a talk show, and Shane as she returns to the city after a setback. They also deal with their romantic and family issues, such as Bette's affair with a married woman, Alice's co-parenting with her fiancée Nat and Nat's ex-wife Gigi, and Shane's reconciliation with her estranged wife Quiara. Meanwhile, Dani, Sophie, Micah, Finley, and Gigi face their own challenges and opportunities in love, work, and identity.
- The cast and characters
-The L Word: Generation Q boasts a talented and diverse cast that brings the characters to life. Here are some of the main actors and their roles:
-
-Jennifer Beals as Bette Porter, a former art curator who is running for mayor on a progressive platform.
-Leisha Hailey as Alice Pieszecki, a successful comedian and media personality who hosts her own talk show.
-Katherine Moennig as Shane McCutcheon, a charismatic hairstylist and entrepreneur who owns a bar.
-Arienne Mandi as Dani Nùñez, a PR executive who works for Bette's campaign and is engaged to Sophie.
-Rosanny Zayas as Sophie Suarez, a TV producer who works for Alice's show and is in love with Dani.
-Leo Sheng as Micah Lee, an adjunct professor who teaches social work and has a crush on his neighbor José.
-Jacqueline Toboni as Finley, an assistant at Alice's show who is struggling with her faith and sexuality.
-Sepideh Moafi as Gigi, Nat's ex-wife who gets involved with Alice and causes tension in their relationship.
-
- The reception and ratings
-The L Word: Generation Q received mostly positive reviews from critics and audiences alike. The show was praised for its updated and realistic portrayal of LGBTQIA+ people, its diverse and talented cast, its witty and heartfelt dialogue, and its compelling and relevant storylines. The show also received some criticism for its lack of trans representation, its uneven pacing, and its reliance on soap opera tropes. The show has an 81% approval rating on Rotten Tomatoes, based on 36 reviews, with an average rating of 7.17/10. It also has a 7.3/10 rating on IMDb, based on 4,480 user ratings. The show was renewed for a second season, which premiered in August 2021.
- Why should you watch The L Word: Generation Q?
-If you are looking for a show that is entertaining, engaging, and empowering, you should definitely watch The L Word: Generation Q. Here are some of the reasons why:
- It's a sequel to the iconic original series
-If you loved the original series, you will enjoy seeing some of your favorite characters again, as well as meeting new ones. The show pays homage to the legacy of The L Word, while also creating its own identity and voice. You will get to see how Bette, Alice, and Shane have grown and changed over the years, as well as how they interact with the younger generation. You will also get to see how the LGBTQIA+ culture and community have evolved since the original series ended.
- It's a diverse and inclusive representation of LGBTQIA+ people
-The L Word: Generation Q is one of the most diverse and inclusive shows on television today. It features characters of different races, ethnicities, genders, sexualities, ages, and backgrounds. It showcases the diversity and complexity of LGBTQIA+ experiences and identities, as well as the challenges and joys they face in their everyday lives. It also tackles important social issues, such as racism, sexism, homophobia, transphobia, biphobia, classism, and more.
- It's a fun and engaging drama with relatable themes
-The L Word: Generation Q is not just a show about LGBTQIA+ people; it's a show about people. It explores universal themes that anyone can relate to, such as love, friendship, family, career, identity, and happiness. It balances drama and comedy, romance and friendship, conflict and resolution. It makes you laugh, cry, think, and feel. It's a show that celebrates life in all its forms.
- How can you download The L Word: Generation Q Season 1?
-If you are interested in watching The L Word: Generation Q Season 1, you have several options to download it. Here are some of them:
- Option 1: Subscribe to Showtime and stream online
-The easiest way to watch The L Word: Generation Q Season 1 is to subscribe to Showtime, the network that produces and airs the show. You can get Showtime for $10.99 per month or $109.90 per year. You can also get a free trial for seven days. With Showtime, you can stream the show online on your computer, smartphone, tablet, smart TV, or other devices. You can also download the episodes to watch offline later.
- Option 2: Buy or rent the episodes on digital platforms
-If you don't want to subscribe to Showtime, you can also buy or rent the episodes on various digital platforms, such as Amazon Prime Video, iTunes, Google Play, Vudu, or FandangoNOW. You can buy the whole season for $19.99 or $24.99 (SD or HD), or individual episodes for $1.99 or $2.99 (SD or HD). You can also rent the episodes for $0.99 or $1.99 (SD or HD) for 48 hours.
- Option 3: Get the DVD or Blu-ray discs
-If you prefer physical media over digital media, you can also get the DVD or Blu-ray discs of The L Word: Generation Q Season 1. You can order them online from Amazon, Walmart, Best Buy, or other retailers. The DVD costs $24.99 and the Blu-ray costs $29.99. The discs include all eight episodes of the season, as well as some bonus features, such as behind-the-scenes footage, interviews, and deleted scenes.
- Conclusion
-The L Word: Generation Q Season 1 is a must-watch for fans of the original series, as well as for anyone who enjoys a good drama with diverse and complex characters. The show offers a realistic and positive representation of LGBTQIA+ people and their stories, as well as a fun and engaging entertainment experience. You can download the season in various ways, depending on your preference and budget. Whether you stream it online, buy or rent it on digital platforms, or get it on DVD or Blu-ray, you will not regret watching this amazing show.
- FAQs
-
-Q: When will The L Word: Generation Q Season 2 be available to download?
-A: The L Word: Generation Q Season 2 premiered on August 8, 2021 on Showtime. It will consist of 10 episodes that will air weekly until October 10, 2021. You can download the episodes on Showtime or other digital platforms after they air, or wait for the DVD or Blu-ray release, which has not been announced yet.
-Q: How can I watch The L Word: Generation Q Season 1 for free?
-A: You can watch The L Word: Generation Q Season 1 for free by signing up for a free trial of Showtime, which lasts for seven days. You can also watch the first episode for free on YouTube or on Showtime's website.
-Q: Where can I watch the original series The L Word?
-A: You can watch the original series The L Word on Showtime, where it is available to stream online or download offline. You can also buy or rent the episodes or seasons on digital platforms, such as Amazon Prime Video, iTunes, Google Play, Vudu, or FandangoNOW. You can also get the DVD or Blu-ray discs from online or offline retailers.
-Q: Who created The L Word: Generation Q?
-A: The L Word: Generation Q was created by Marja-Lewis Ryan, who is also the showrunner and executive producer. She is a writer, director, and producer who has worked on other projects, such as 6 Balloons, The Four-Faced Liar, and CollegeHumor Originals. She is also a lesbian and a fan of the original series.
-Q: What does the Q stand for in The L Word: Generation Q?
-A: The Q stands for queer, which is an umbrella term that encompasses various sexual and gender identities that are not heterosexual or cisgender. It is also a way of acknowledging the diversity and fluidity of LGBTQIA+ people and their experiences.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Try the iOS Launcher 12 APK and get the iPhone 12 features on your Android device.md b/spaces/congsaPfin/Manga-OCR/logs/Try the iOS Launcher 12 APK and get the iPhone 12 features on your Android device.md
deleted file mode 100644
index 61b28c0f0714c9afdee5c935a0abf84e970bf0d2..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Try the iOS Launcher 12 APK and get the iPhone 12 features on your Android device.md
+++ /dev/null
@@ -1,109 +0,0 @@
-
-iOS Launcher 12 APK Download: How to Get the iOS 13 Look on Your Android Device
- Introduction
- Do you love the sleek and elegant design of iOS 13, but don't want to switch from your Android device? If so, you're in luck. There's a way to transform your Android phone or tablet into an iOS-like device, without rooting or flashing. All you need is a simple app called iOS Launcher 12.
-ios launcher 12 apk download Download --->>> https://urlca.com/2uO6NK
- What is iOS Launcher 12?
- iOS Launcher 12 is an app that mimics the look and feel of iOS 13 on your Android device. It changes your home screen, app icons, wallpaper, widgets, and more to resemble the Apple interface. It also adds some features that are exclusive to iOS, such as Control Center, Notification Center, Spotlight Search, and Siri.
- Why use iOS Launcher 12?
- There are many reasons why you might want to use iOS Launcher 12 on your Android device. Here are some of them:
-
-You can enjoy the best of both worlds: the functionality of Android and the aesthetics of iOS.
-You can impress your friends and family with your unique and stylish device.
-You can customize your device to suit your preferences and mood.
-You can save money and time by not having to buy a new device or learn a new operating system.
-
- How to download and install iOS Launcher 12
- Downloading and installing iOS Launcher 12 is easy and fast. Just follow these steps:
- Step 1: Download the APK file from a trusted source
- The first thing you need to do is to download the APK file of iOS Launcher 12 from a reliable source. You can use this link to get the latest version of the app. Make sure you have enough storage space on your device before downloading.
- Step 2: Enable unknown sources on your device
- The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message, but don't worry, it's safe.
- Step 3: Install the APK file and launch the app
- The final thing you need to do is to install the APK file and launch the app. To do this, locate the downloaded file in your file manager and tap on it. You may see a prompt asking for permissions, just tap on Install and wait for it to finish. Once done, tap on Open and enjoy your new iOS-like device.
- How to customize your Android device with iOS Launcher 12
- Now that you have installed iOS Launcher 12, you can start customizing your Android device to make it look more like iOS 13. Here are some things you can do:
- Change the wallpaper and icons
- You can change the wallpaper and icons of your device to match the ones on iOS 13. To do this, long press on an empty space on your home screen and tap on Wallpaper or Icon Pack. You can choose from various options or use your own photos.
- Add widgets and shortcuts
- You can also add widgets and shortcuts to your home screen to access your favorite apps and functions quickly. To do this, long press on an empty space on your home screen and tap on Widgets or Shortcuts. You can drag and drop them to your desired position.
- Adjust the settings and preferences
- You can also adjust the settings and preferences of iOS Launcher 12 to make it more comfortable and convenient for you. To do this, swipe up from the bottom of your screen and tap on the gear icon. You can change things like the grid size, the app drawer, the gestures, the notifications, and more.
- Conclusion
- iOS Launcher 12 is a great app that lets you enjoy the iOS 13 look on your Android device. It's easy to download, install, and customize. It also offers some features that are exclusive to iOS, such as Control Center, Notification Center, Spotlight Search, and Siri. If you're looking for a way to spice up your Android device, give iOS Launcher 12 a try.
- Summary of the main points
- In this article, we have covered the following points:
-
-What is iOS Launcher 12 and why use it?
-How to download and install iOS Launcher 12 on your Android device.
-How to customize your Android device with iOS Launcher 12.
-
- Call to action and final thoughts
- If you're interested in trying out iOS Launcher 12, you can download it from this link. It's free and safe to use. You can also check out the developer's website for more information and support. We hope you enjoyed this article and found it helpful. If you did, please share it with your friends and leave us a comment below. Thank you for reading!
- Frequently Asked Questions
- Q: Is iOS Launcher 12 compatible with all Android devices?
-A: iOS Launcher 12 is compatible with most Android devices running Android 4.4 or higher. However, some features may not work on some devices or models due to hardware or software limitations.
- Q: Does iOS Launcher 12 affect the performance or battery life of my device?
-A: iOS Launcher 12 is designed to be lightweight and smooth. It does not consume much resources or power from your device. However, if you experience any issues, you can try clearing the cache or data of the app, or uninstalling and reinstalling it.
- Q: How can I uninstall iOS Launcher 12 from my device?
-A: To uninstall iOS Launcher 12 from your device, you can follow these steps:
-
-Go to Settings > Apps > iOS Launcher 12 and tap on Uninstall.
-Go to Settings > Home > Default Home App and select your original launcher.
-Restart your device.
-
- Q: Can I use other launchers or themes with iOS Launcher 12?
-A: Yes, you can use other launchers or themes with iOS Launcher 12. However, some of them may not be compatible or may override some of the features of iOS Launcher 12. You can try them at your own risk.
- Q: Where can I get more help or feedback about iOS Launcher 12?
-A: You can get more help or feedback about iOS Launcher 12 by contacting the developer via email or visiting their website. You can also join their Facebook group or follow them on Twitter for updates and news.
-
-
\ No newline at end of file
diff --git a/spaces/consciousAI/question_generation/app.py b/spaces/consciousAI/question_generation/app.py
deleted file mode 100644
index f3117d87dc707fdaab3913c64b7b1207a52c2210..0000000000000000000000000000000000000000
--- a/spaces/consciousAI/question_generation/app.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import gradio as gr
-import torch
-from transformers import (
- pipeline,
- AutoModelForSeq2SeqLM,
- AutoTokenizer
-)
-
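-# Question-generation checkpoints on the Hugging Face Hub: three "auto" variants (M0-M2)
-# and two "auto-hints" variants (M4, M5). All five models (and their tokenizers) are loaded
-# eagerly below; the models are moved to the selected device.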
-M0 = "consciousAI/question-generation-auto-t5-v1-base-s"
-M1 = "consciousAI/question-generation-auto-t5-v1-base-s-q"
-M2 = "consciousAI/question-generation-auto-t5-v1-base-s-q-c"
-
-M4 = "consciousAI/question-generation-auto-hints-t5-v1-base-s-q"
-M5 = "consciousAI/question-generation-auto-hints-t5-v1-base-s-q-c"
-
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
-_m0 = AutoModelForSeq2SeqLM.from_pretrained(M0).to(device)
-_tk0 = AutoTokenizer.from_pretrained(M0, cache_dir="./cache")
-
-_m1 = AutoModelForSeq2SeqLM.from_pretrained(M1).to(device)
-_tk1 = AutoTokenizer.from_pretrained(M1, cache_dir="./cache")
-
-_m2 = AutoModelForSeq2SeqLM.from_pretrained(M2).to(device)
-_tk2 = AutoTokenizer.from_pretrained(M2, cache_dir="./cache")
-
-_m4 = AutoModelForSeq2SeqLM.from_pretrained(M4).to(device)
-_tk4 = AutoTokenizer.from_pretrained(M4, cache_dir="./cache")
-
-_m5 = AutoModelForSeq2SeqLM.from_pretrained(M5).to(device)
-_tk5 = AutoTokenizer.from_pretrained(M5, cache_dir="./cache")
-
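-# _formatQs: split the model's raw output on "?" and renumber the questions for display,
-# e.g. _formatQs(["What is X? How does Y work?"]) returns "1. What is X? \n2. How does Y work? \n".
-# Returns None (or an empty string) when nothing usable was generated.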
-def _formatQs(questions):
- _finalQs = ""
-
- if questions is not None:
- _qList = questions[0].strip().split("?")
-
- qIdx = 1
- if len(_qList) > 1:
- for idx, _q in enumerate(_qList):
- _q = _q.strip()
- if _q is not None and len(_q) !=0:
- _finalQs += str(qIdx) + ". " + _q + "? \n"
- qIdx+=1
- else:
- if len(_qList[0])>1:
- _finalQs = "1. " + str(_qList[0]) + "?"
- else:
- _finalQs = None
- return _finalQs
-
-def _generate(mode, context, hint=None, minLength=50, maxLength=500, lengthPenalty=2.0, earlyStopping=True, numReturnSequences=1, numBeams=2, noRepeatNGramSize=0, doSample=False, topK=0, penaltyAlpha=0, topP=0, temperature=0, model="All"):
-
- predictionM0 = None
- predictionM1 = None
- predictionM2 = None
- predictionM4 = None
- predictionM5 = None
-
- if mode == 'Auto':
- _inputText = "question_context: " + context
-
- if model == "All":
- _encoding = _tk0.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024
- _outputEncoded = _m0.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM0 = [_tk0.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
-
- _encoding = _tk1.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024
- _outputEncoded = _m1.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM1 = [_tk1.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
-
- _encoding = _tk2.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024 .to(device)
- _outputEncoded = _m2.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM2 = [_tk2.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
-
- _encoding = _tk4.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024 .to(device)
- _outputEncoded = _m4.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM4 = [_tk4.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
-
- _encoding = _tk5.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024 .to(device)
- _outputEncoded = _m5.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM5 = [_tk5.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
- elif model == "question-generation-auto-hints-t5-v1-base-s-q-c":
- _encoding = _tk5.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024 .to(device)
- _outputEncoded = _m5.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM5 = [_tk5.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
- elif model == "question-generation-auto-hints-t5-v1-base-s-q":
- _encoding = _tk4.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024 .to(device)
- _outputEncoded = _m4.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM4 = [_tk4.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
- elif model == "question-generation-auto-t5-v1-base-s-q-c":
- _encoding = _tk2.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024 .to(device)
- _outputEncoded = _m2.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM2 = [_tk2.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
- elif model == "question-generation-auto-t5-v1-base-s-q":
- _encoding = _tk1.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024
- _outputEncoded = _m1.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM1 = [_tk1.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
- elif model == "question-generation-auto-t5-v1-base-s":
- _encoding = _tk0.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024
- _outputEncoded = _m0.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM0 = [_tk0.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
- elif mode == 'Hints':
- _inputText = "question_hint: " + hint + "question_context: " + context
-
- _encoding = _tk4.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024 .to(device)
- _outputEncoded = _m4.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM4 = [_tk4.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
-
- _encoding = _tk5.encode(_inputText, return_tensors='pt', truncation=True, padding='max_length').to(device) # max_length=1024 .to(device)
- _outputEncoded = _m5.generate(_encoding,
- min_length=minLength,
- max_length=maxLength,
- length_penalty=lengthPenalty,
- early_stopping=earlyStopping,
- num_return_sequences=numReturnSequences,
- num_beams=numBeams,
- no_repeat_ngram_size=noRepeatNGramSize,
- do_sample=doSample,
- top_k=topK,
- penalty_alpha=penaltyAlpha,
- top_p=topP,
- temperature=temperature
- )
- predictionM5 = [_tk5.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=True) for id in _outputEncoded]
-
- predictionM0 = _formatQs(predictionM0)
- predictionM1 = _formatQs(predictionM1)
- predictionM2 = _formatQs(predictionM2)
- predictionM4 = _formatQs(predictionM4)
- predictionM5 = _formatQs(predictionM5)
-
- return predictionM5, predictionM4, predictionM2, predictionM1, predictionM0
-
-with gr.Blocks() as demo:
- gr.Markdown(value="# Question Generation Demo \n [question-generation-auto-t5-v1-base-s](https://huggingface.co/anshoomehra/question-generation-auto-t5-v1-base-s) ✫ [question-generation-auto-t5-v1-base-s-q](https://huggingface.co/anshoomehra/question-generation-auto-t5-v1-base-s-q) ✫ [question-generation-auto-t5-v1-base-s-q-c](https://huggingface.co/anshoomehra/question-generation-auto-t5-v1-base-s-q-c) ✫ [question-generation-auto-hints-t5-v1-base-s-q](https://huggingface.co/anshoomehra/question-generation-auto-hints-t5-v1-base-s-q) ✫ [question-generation-auto-hints-t5-v1-base-s-q-c](https://huggingface.co/anshoomehra/question-generation-auto-hints-t5-v1-base-s-q-c)\n\n Please be patient, 5 models may take up to 80 sec to run on CPU")
-
-    with gr.Accordion(variant='compact', label='Search Methods: Deterministic / Stochastic / Contrastive', open=True):
- with gr.Row():
- mode = gr.Radio(["Auto", "Hints"], value="Auto", label="Mode")
- with gr.Row():
- minLength = gr.Slider(10, 512, 50, step=1, label="Min Length")
- maxLength = gr.Slider(20, 512, 164, step=1, label="Max Length")
- lengthPenalty = gr.Slider(-5, 5, 1, label="Length Penalty")
- earlyStopping = gr.Checkbox(True, label="Early Stopping [EOS]")
- numReturnSequences = gr.Slider(1, 3, 1, step=1, label="Num return Sequences")
- with gr.Row():
- numBeams = gr.Slider(1, 10, 4, step=1, label="Beams")
- noRepeatNGramSize = gr.Slider(0, 5, 3, step=1, label="No Repeat N-Gram Size")
- with gr.Row():
- doSample = gr.Checkbox(label="Do Random Sample")
- topK = gr.Slider(0, 50, 0, step=1, label="Top K")
- penaltyAlpha = gr.Slider(0.0, 1, 0, label="Penalty Alpha")
- topP = gr.Slider(0, 1, 0, label="Top P/Nucleus Sampling")
- temperature = gr.Slider(0.01, 1, 1, label="Temperature")
- with gr.Row():
- model = gr.Dropdown(["question-generation-auto-hints-t5-v1-base-s-q-c", "question-generation-auto-hints-t5-v1-base-s-q", "question-generation-auto-t5-v1-base-s-q-c", "question-generation-auto-t5-v1-base-s-q", "question-generation-auto-t5-v1-base-s", "All"], label="Model", value="question-generation-auto-hints-t5-v1-base-s-q-c")
-
-
- with gr.Accordion(variant='compact', label='Input Values'):
- with gr.Row(variant='compact'):
- contextDefault = "Google LLC is an American multinational technology company focusing on search engine technology, online advertising, cloud computing, computer software, quantum computing, e-commerce, artificial intelligence, and consumer electronics. It has been referred to as 'the most powerful company in the world' and one of the world's most valuable brands due to its market dominance, data collection, and technological advantages in the area of artificial intelligence. Its parent company Alphabet is considered one of the Big Five American information technology companies, alongside Amazon, Apple, Meta, and Microsoft."
- hintDefault = ""
- context = gr.Textbox(contextDefault, label="Context", placeholder="Dummy Context", lines=5)
- hint = gr.Textbox(hintDefault, label="Hint", placeholder="Enter hint here. Ensure the mode is set to 'Hints' prior using hints.", lines=2)
-
- with gr.Accordion(variant='compact', label='Multi-Task Model(s) Sensitive To Hints'):
- with gr.Row(variant='compact'):
- _predictionM5 = gr.Textbox(label="Predicted Questions - question-generation-auto-hints-t5-v1-base-s-q-c [Hints Sensitive]")
- _predictionM4 = gr.Textbox(label="Predicted Questions - question-generation-auto-hints-t5-v1-base-s-q [Hints Sensitive]")
-
- with gr.Accordion(variant='compact', label='Uni-Task Model(s) Non-Sensitive To Hints'):
- with gr.Row(variant='compact'):
- _predictionM2 = gr.Textbox(label="Predicted Questions - question-generation-auto-t5-v1-base-s-q-c [No Hints]")
- _predictionM1 = gr.Textbox(label="Predicted Questions - question-generation-auto-t5-v1-base-s-q [No Hints]")
- _predictionM0 = gr.Textbox(label="Predicted Questions - question-generation-auto-t5-v1-base-s [No Hints]")
-
- with gr.Row():
- gen_btn = gr.Button("Generate Questions")
- gen_btn.click(fn=_generate,
- inputs=[mode, context, hint, minLength, maxLength, lengthPenalty, earlyStopping, numReturnSequences, numBeams, noRepeatNGramSize, doSample, topK, penaltyAlpha, topP, temperature, model],
- outputs=[_predictionM5, _predictionM4, _predictionM2, _predictionM1, _predictionM0]
- )
-
-demo.launch(show_error=True)
\ No newline at end of file
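
Each branch of `_generate` above repeats the same encode/generate/decode boilerplate for a different (model, tokenizer) pair. A minimal sketch of how that duplication could be factored out; the helper name `_run_model` and the keyword bundle are illustrative and not part of the original Space:

```python
# Illustrative refactor sketch, not part of the original app: one helper that runs
# any (model, tokenizer) pair from the app with the shared generation arguments.
def _run_model(model, tokenizer, input_text, device, **gen_kwargs):
    encoding = tokenizer.encode(
        input_text, return_tensors="pt", truncation=True, padding="max_length"
    ).to(device)
    output_ids = model.generate(encoding, **gen_kwargs)
    return [
        tokenizer.decode(ids, clean_up_tokenization_spaces=False, skip_special_tokens=True)
        for ids in output_ids
    ]

# Inside _generate(), each duplicated branch could then collapse to e.g.:
#   predictionM5 = _run_model(_m5, _tk5, _inputText, device,
#                             min_length=minLength, max_length=maxLength,
#                             length_penalty=lengthPenalty, early_stopping=earlyStopping,
#                             num_return_sequences=numReturnSequences, num_beams=numBeams,
#                             no_repeat_ngram_size=noRepeatNGramSize, do_sample=doSample,
#                             top_k=topK, penalty_alpha=penaltyAlpha, top_p=topP,
#                             temperature=temperature)
```

A dict mapping the dropdown's model names to their (model, tokenizer) pairs would likewise let the `model == "..."` branches collapse into a single lookup.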
diff --git a/spaces/coraKong/WorldSimulation/CharacterStatistics.py b/spaces/coraKong/WorldSimulation/CharacterStatistics.py
deleted file mode 100644
index b0c85390b277d4eb6a5668cb1482d32b862200e9..0000000000000000000000000000000000000000
--- a/spaces/coraKong/WorldSimulation/CharacterStatistics.py
+++ /dev/null
@@ -1,261 +0,0 @@
-import numpy as np
-import matplotlib.pyplot as plt
-
-class CharacterStatistics:
- def __init__(self, characters):
- self.characters = characters
- self.characters_array = np.array([c.to_list() for c in characters])
-
-    # Leaderboard: top n characters by cultivation (rank, then level)
- def find_highest_cultivation(self, n=1):
- sorted_characters = sorted(self.characters, key=lambda c: (c.cultivation_rank, c.cultivation_level), reverse=True)
- return sorted_characters[:n]
-
-    # Ranking of clans by member count
- def rank_top_clans(self, top_n=4):
- clan_size = {}
- for c in self.characters:
- clan_size[c.clan] = clan_size.get(c.clan, 0) + 1
-
- sorted_clans = sorted(clan_size.items(), key=lambda x: x[1], reverse=True)[:top_n]
-
- return sorted_clans
-
-    # Distribution plots for real_age, apparent_age, cultivation_level, cultivation_rank
- def plot_attribute_distribution(self):
- attributes = ['Real Age', 'Apparent Age', 'Cultivation Level', 'Cultivation Rank']
- attribute_indices = [0, 1, 2, 3]
- fig, axs = plt.subplots(2, 2, figsize=(10, 8))
- axs = axs.flatten()
- for i, ax in enumerate(axs):
- ax.hist(self.characters_array[:, attribute_indices[i]], bins=20, edgecolor='black')
- ax.set_xlabel(attributes[i])
- ax.set_ylabel('Frequency')
- plt.tight_layout()
- return fig
-
-    # Averages for real_age, apparent_age, cultivation_level, cultivation_rank and number of children
- def calculate_average_attributes(self):
- real_age_mean = np.mean(self.characters_array[:, 0])
- apparent_age_mean = np.mean(self.characters_array[:, 1])
- cultivation_level_mean = np.mean(self.characters_array[:, 2])
- cultivation_rank_mean = np.mean(self.characters_array[:, 3])
- child_count_mean = np.mean(self.characters_array[:, 19])
- return real_age_mean, apparent_age_mean, cultivation_level_mean, cultivation_rank_mean, child_count_mean
-
-    # Plot cultivation_rank (x-axis) against the four combat_power attributes (y-axis)
- def plot_combat_power_vs_cultivation_rank(self):
-
- fig, axs = plt.subplots(2, 2, figsize=(10, 8))
- axs = axs.flatten()
-
- for i, combat_power in enumerate(['Attack Power', 'Defense Power', 'Attack Speed', 'Health Points']):
-
- combat_power_data = []
- for rank in range(6):
- rank_data = self.characters_array[self.characters_array[:,3]==rank, i+5]
- combat_power_data.append(rank_data)
-
- axs[i].boxplot(combat_power_data)
- axs[i].set_title(combat_power)
- axs[i].set_xlabel('Cultivation Rank')
- axs[i].set_ylabel('Value')
-
- plt.tight_layout()
- return fig
-
- def calculate_average_special_constitution(self):
- special_constitution_mean = np.mean(self.characters_array[:, 9:13], axis=0)
- return special_constitution_mean
-
- def calculate_average_spiritual_roots(self):
- spiritual_roots_mean = np.mean(self.characters_array[:, 13:18], axis=0)
- return spiritual_roots_mean
-
-    # Count characters with no spiritual roots
- def count_zero_spiritual_roots(self):
- zero_spiritual_roots_count = np.sum(np.all(self.characters_array[:, 13:18] == 0, axis=1))
- return zero_spiritual_roots_count
-
-    # Distribution of the number of spiritual roots
- def plot_sum_spiritual_root_distribution(self):
- spiritual_roots_sum = np.sum(self.characters_array[:, 13:18], axis=1)
-
- fig = plt.figure()
- plt.hist(spiritual_roots_sum, bins=5, range=(0,5))
- plt.xlabel('Number of Spiritual Roots')
- plt.ylabel('Number of Characters')
- plt.title('Distribution of Spiritual Roots')
-
- return fig
-
-    # Population distribution per spiritual root (x-axis: the 5 roots, y-axis: number of characters)
- def plot_spiritual_roots_distribution(self):
-
- spiritual_roots = self.characters_array[:, 13:18]
-
- means = np.mean(spiritual_roots, axis=0)
- roots = ['Metal', 'Wood', 'Water', 'Fire', 'Earth']
-
- fig = plt.figure()
- plt.bar(roots, means)
- plt.xlabel('Spiritual Roots')
- plt.ylabel('Percentage of Characters')
- plt.title('Distribution of Spiritual Roots in Population')
-
- return fig
-
-    # Distribution of clan sizes
- def plot_clan_size_distribution(self):
- clan_size = {}
- for c in self.characters:
- clan_size[c.clan] = clan_size.get(c.clan, 0) + 1
-
- sizes = list(clan_size.values())
- fig = plt.figure()
- plt.hist(sizes, bins=20)
- plt.xlabel('Clan Size')
- plt.ylabel('Number of Clans')
- return fig
-
- def summarize(self):
- print("===== Character Statistics Summary =====")
-
-        # Print average attributes
- print("Average Attributes:")
- real_age, apparent_age, cultivation_level, cultivation_rank, child_count = self.calculate_average_attributes()
- print(f"Real Age: {real_age:.2f}")
- print(f"Apparent Age: {apparent_age:.2f}")
- print(f"Cultivation Level: {cultivation_level:.2f}")
- print(f"Cultivation Rank: {cultivation_rank:.2f}")
- print(f"Child Count: {child_count:.2f}")
-
-        # Print average special constitutions
- print("\nAverage Special Constitutions:")
- special_names = ['战斗', '合欢', '灵龟', '蜉蝣']
- for i, v in enumerate(self.calculate_average_special_constitution()):
- print(f"{special_names[i]}: {v:.2%}")
-
-        # Print average spiritual roots
- print("\nAverage Spiritual Roots:")
- root_names = ['金', '木', '水', '火', '土']
- for i, v in enumerate(self.calculate_average_spiritual_roots()):
- print(f"{root_names[i]}: {v:.2%}")
-
-        # Plot the statistics charts
- print("\nPlotting graphs...")
- self.plot_attribute_distribution()
- # self.plot_combat_power_vs_cultivation_rank()
- self.plot_spiritual_roots_distribution()
- self.plot_sum_spiritual_root_distribution()
- self.plot_clan_size_distribution()
- self.print_top_cultivators()
-        # Attack power leaderboard
- self.print_rank('attack_power', name='Attack Power')
-        # Defense power leaderboard
- self.print_rank('defense_power', name='Defense Power')
- self.print_top_clans()
-
- print("\n===== End Summary =====\n")
-
-    # Return the summary output as a Markdown string
- def summarize_markdown(self):
- md = "## Character Statistics Summary\n\n"
- md += "### Average Attributes:\n\n"
- real_age, apparent_age, cultivation_level, cultivation_rank, child_count = self.calculate_average_attributes()
- md += f"Real Age: {real_age:.2f}\n\n"
- md += f"Apparent Age: {apparent_age:.2f}\n\n"
- md += f"Cultivation Level: {cultivation_level:.2f}\n\n"
- md += f"Cultivation Rank: {cultivation_rank:.2f}\n\n"
- md += f"Child Count: {child_count:.2f}\n\n"
-
- md += "### Average Special Constitutions:\n\n"
- special_names = ['战斗', '合欢', '灵龟', '蜉蝣']
- for i, v in enumerate(self.calculate_average_special_constitution()):
- md += f"{special_names[i]}: {v:.2%}\n\n"
-
- md += "### Average Spiritual Roots:\n\n"
- root_names = ['金', '木', '水', '火', '土']
- for i, v in enumerate(self.calculate_average_spiritual_roots()):
- md += f"{root_names[i]}: {v:.2%}\n\n"
-
- # md += "### Plotting graphs...\n\n"
-        # Draw and save the figures
- fig1 = self.plot_attribute_distribution()
- # fig1.savefig('./attribute_distribution.png')
- # md += "#### Attribute Distribution\n\n"
- # md += "\n\n"
-
- fig2 = self.plot_spiritual_roots_distribution()
- # fig2.savefig('./spiritual_roots_distribution.png')
- # md += "#### Spiritual Roots Distribution\n\n"
- # md += "\n\n"
-
- fig3 = self.plot_sum_spiritual_root_distribution()
- # fig3.savefig('./sum_spiritual_roots_distribution.png')
- # md += "#### Sum Spiritual Roots Distribution\n\n"
- # md += "\n\n"
-
- fig4 = self.plot_clan_size_distribution()
- # fig4.savefig('clan_size_distribution.png')
- # md += "#### Clan Size Distribution\n\n"
- # md += "\n\n"
-
- md += "#### Top Cultivators\n\n"
- md += self.print_top_cultivators_markdown()
- md += "#### Attack Power Ranking\n\n"
- md += self.print_rank_markdown('attack_power', name='Attack Power')
- md += "#### Defense Power Ranking\n\n"
- md += self.print_rank_markdown('defense_power', name='Defense Power')
- md += "#### Top Clans\n\n"
- md += self.print_top_clans_markdown()
-
- return md, fig1, fig2, fig3, fig4
-
- def print_top_cultivators(self, top_n=10):
-
- print("\n== Top Cultivators ==")
- print("{:<10} {:>10} {:>10} {:>10} {:>10}".format('Name', 'Clan', 'Rank', 'Level', '境界'))
- for c in self.find_highest_cultivation(top_n):
- print("{:<10} {:>10} {:>10} {:>10} {:>10}".format(c.name, c.clan, c.cultivation_rank, c.cultivation_level, c.view_rank()))
-
- def print_top_cultivators_markdown(self, n=10):
- md = f"| Name | Clan | Cultivation Rank | Cultivation Level | 境界\n"
- md += "| ---- | ---- | ---------------- | ---------------- | ----\n"
- for c in self.find_highest_cultivation(n):
- md += f"| {c.name} | {c.clan} | {c.cultivation_rank:.2f} | {c.cultivation_level:.2f} | {c.view_rank()}\n"
- return md
-
- def print_rank(self, rank_key, top_n=10, name=None):
-
- print(f"\n== {'Name' if name is None else name} Rank ==")
-
- sorted_characters = sorted(self.characters, key=lambda c: c.combat_power[rank_key], reverse=True)
-
- print("{:<10} {:>10} {:>10} {:>10} {:>10}".format('Name', rank_key if name is None else name, 'Rank', 'Level', '境界'))
-
- for c in sorted_characters[:top_n]:
- print("{:<10} {:>10}{:>10} {:>10} {:>10}".format(c.name, c.combat_power[rank_key], c.cultivation_rank, c.cultivation_level, c.view_rank()))
-
- def print_rank_markdown(self, attr, name=None, n=10):
- md = f"| Name | {attr if name is None else name} | Cultivation Rank | Cultivation Level | 境界\n"
- md += "| ---- | ---- | ---------------- | ---------------- | ----\n"
- sorted_characters = sorted(self.characters, key=lambda c: c.combat_power[attr], reverse=True)
- for c in sorted_characters[:n]:
- md += f"| {c.name} | {c.combat_power[attr]:.2f} | {c.cultivation_rank:.2f} | {c.cultivation_level:.2f} | {c.view_rank()}\n"
- return md
-
- def print_top_clans(self, top_n=4):
-
- print("\n== Top Clans ==")
- print("{:<10} {:>10}".format('Clan', 'Members'))
-
- for clan, size in self.rank_top_clans(top_n):
- print("{:<10} {:>10}".format(clan, size))
-
- def print_top_clans_markdown(self, n=10):
- md = "| Clan | Clan Size |\n"
- md += "| -------- | --------- |\n"
- for clan, size in self.rank_top_clans(n):
- md += f"| {clan} | {size} |\n"
- return md
\ No newline at end of file
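
The real `Character` class lives elsewhere in this Space and is not part of this diff. Below is a hedged usage sketch with a hypothetical stand-in: the attribute names follow what `CharacterStatistics` actually accesses, and the values are random placeholders rather than the real simulation's data.

```python
import random
from CharacterStatistics import CharacterStatistics

class FakeCharacter:
    """Hypothetical stand-in for the Space's Character class (not the real implementation)."""
    def __init__(self, name, clan):
        self.name = name
        self.clan = clan
        self.cultivation_rank = random.randint(0, 5)
        self.cultivation_level = random.random() * 100
        self.combat_power = {"attack_power": random.random() * 50,
                             "defense_power": random.random() * 50}

    def view_rank(self):
        return f"rank-{self.cultivation_rank}"

    def to_list(self):
        # characters_array indexes columns 0-3, 5-8, 9-12, 13-17 and 19, so 20 values suffice
        return [random.random() for _ in range(20)]

characters = [FakeCharacter(f"c{i}", f"clan{i % 5}") for i in range(100)]
stats = CharacterStatistics(characters)
print(stats.rank_top_clans(top_n=3))                      # [(clan, member count), ...]
md, fig1, fig2, fig3, fig4 = stats.summarize_markdown()   # Markdown report + matplotlib figures
```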
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/pascal_context.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/pascal_context.py
deleted file mode 100644
index ff65bad1b86d7e3a5980bb5b9fc55798dc8df5f4..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/pascal_context.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# dataset settings
-dataset_type = 'PascalContextDataset'
-data_root = 'data/VOCdevkit/VOC2010/'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-img_scale = (520, 520)
-crop_size = (480, 480)
-
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale,
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClassContext',
- split='ImageSets/SegmentationContext/train.txt',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClassContext',
- split='ImageSets/SegmentationContext/val.txt',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClassContext',
- split='ImageSets/SegmentationContext/val.txt',
- pipeline=test_pipeline))
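
This file is a standard MMSegmentation-style dataset config, meant to be merged into a full model config rather than executed directly. A minimal inspection sketch, assuming an mmcv version that still provides `Config` (pre-2.0) or the copy vendored under `annotator/uniformer`, and that the path below matches this repository layout:

```python
# Minimal inspection sketch; mmcv<2.0 (or annotator.uniformer.mmcv) is assumed importable.
from mmcv import Config

cfg = Config.fromfile(
    "annotator/uniformer/configs/_base_/datasets/pascal_context.py"
)
print(cfg.data.train.type)                                  # 'PascalContextDataset'
print(cfg.data.samples_per_gpu)                             # 4 images per GPU
print([step["type"] for step in cfg.data.train.pipeline])   # augmentation order
```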
diff --git a/spaces/cozyanduofen/bingo/src/components/chat-history.tsx b/spaces/cozyanduofen/bingo/src/components/chat-history.tsx
deleted file mode 100644
index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000
--- a/spaces/cozyanduofen/bingo/src/components/chat-history.tsx
+++ /dev/null
@@ -1,48 +0,0 @@
-import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons"
-
-export function ChatHistory() {
- return (
-
-
- 历史记录
-
-
-
-
-
-
-
-
-
-
-
-
无标题的聊天
-
-
上午1:42
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- )
-}
diff --git a/spaces/crobbi/LipNet/modelutil.py b/spaces/crobbi/LipNet/modelutil.py
deleted file mode 100644
index 6969291694d3f01ac4ac0ad875a81a67f45f6110..0000000000000000000000000000000000000000
--- a/spaces/crobbi/LipNet/modelutil.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from tensorflow.python.ops.numpy_ops import np_config
-np_config.enable_numpy_behavior()
-import os
-from tensorflow.keras.models import Sequential
-from tensorflow.keras.layers import Conv3D, LSTM, Dense, Dropout, Bidirectional, MaxPool3D, Activation, Reshape, SpatialDropout3D, BatchNormalization, TimeDistributed, Flatten
-
-def load_model() -> Sequential:
- model = Sequential()
-
- model.add(Conv3D(128, 3, input_shape=(75,46,140,1), padding='same'))
- model.add(Activation('relu'))
- model.add(MaxPool3D((1,2,2)))
-
- model.add(Conv3D(256, 3, padding='same'))
- model.add(Activation('relu'))
- model.add(MaxPool3D((1,2,2)))
-
- model.add(Conv3D(75, 3, padding='same'))
- model.add(Activation('relu'))
- model.add(MaxPool3D((1,2,2)))
-
- model.add(TimeDistributed(Flatten()))
-
- model.add(Bidirectional(LSTM(128, kernel_initializer='Orthogonal', return_sequences=True)))
- model.add(Dropout(.5))
-
- model.add(Bidirectional(LSTM(128, kernel_initializer='Orthogonal', return_sequences=True)))
- model.add(Dropout(.5))
-
- model.add(Dense(41, kernel_initializer='he_normal', activation='softmax'))
- # print("path",os.path.join('..','models','checkpoint'))
- model.load_weights(os.path.join('models','checkpoint'))
-
- return model
\ No newline at end of file
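
A hedged usage sketch for `load_model()`: the 75×46×140×1 input shape and the 41-way softmax come from the layer definitions above, the checkpoint under `models/checkpoint` must exist, and the decode step uses the standard Keras backend CTC utility rather than anything specific to this Space.

```python
import numpy as np
import tensorflow as tf
from modelutil import load_model

model = load_model()  # loads weights from models/checkpoint, as in the code above

# One clip of 75 frames, each a 46x140 single-channel mouth crop (zeros here, just to show shapes).
dummy_clip = np.zeros((1, 75, 46, 140, 1), dtype=np.float32)
yhat = model.predict(dummy_clip)              # shape (1, 75, 41): per-frame class probabilities

# Greedy CTC decoding; mapping indices back to characters is handled elsewhere in the app.
decoded, _ = tf.keras.backend.ctc_decode(yhat, input_length=np.array([75]), greedy=True)
print(decoded[0].numpy())
```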
diff --git a/spaces/crystalai/constellation/Dockerfile b/spaces/crystalai/constellation/Dockerfile
deleted file mode 100644
index 94ee76a4f45af463ab7f945633c9258172f9cc80..0000000000000000000000000000000000000000
--- a/spaces/crystalai/constellation/Dockerfile
+++ /dev/null
@@ -1,2 +0,0 @@
-FROM huggingface/autotrain-advanced:latest
-CMD autotrain app --port 7860
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/TgaImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/TgaImagePlugin.py
deleted file mode 100644
index 67dfc3d3c8e5726c5885b1c62cdcb2553854c4dc..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/TgaImagePlugin.py
+++ /dev/null
@@ -1,255 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# TGA file handling
-#
-# History:
-# 95-09-01 fl created (reads 24-bit files only)
-# 97-01-04 fl support more TGA versions, including compressed images
-# 98-07-04 fl fixed orientation and alpha layer bugs
-# 98-09-11 fl fixed orientation for runlength decoder
-#
-# Copyright (c) Secret Labs AB 1997-98.
-# Copyright (c) Fredrik Lundh 1995-97.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import warnings
-
-from . import Image, ImageFile, ImagePalette
-from ._binary import i16le as i16
-from ._binary import o8
-from ._binary import o16le as o16
-
-#
-# --------------------------------------------------------------------
-# Read RGA file
-
-
-MODES = {
- # map imagetype/depth to rawmode
- (1, 8): "P",
- (3, 1): "1",
- (3, 8): "L",
- (3, 16): "LA",
- (2, 16): "BGR;5",
- (2, 24): "BGR",
- (2, 32): "BGRA",
-}
-
-
-##
-# Image plugin for Targa files.
-
-
-class TgaImageFile(ImageFile.ImageFile):
- format = "TGA"
- format_description = "Targa"
-
- def _open(self):
- # process header
- s = self.fp.read(18)
-
- id_len = s[0]
-
- colormaptype = s[1]
- imagetype = s[2]
-
- depth = s[16]
-
- flags = s[17]
-
- self._size = i16(s, 12), i16(s, 14)
-
- # validate header fields
- if (
- colormaptype not in (0, 1)
- or self.size[0] <= 0
- or self.size[1] <= 0
- or depth not in (1, 8, 16, 24, 32)
- ):
- msg = "not a TGA file"
- raise SyntaxError(msg)
-
- # image mode
- if imagetype in (3, 11):
- self.mode = "L"
- if depth == 1:
- self.mode = "1" # ???
- elif depth == 16:
- self.mode = "LA"
- elif imagetype in (1, 9):
- self.mode = "P"
- elif imagetype in (2, 10):
- self.mode = "RGB"
- if depth == 32:
- self.mode = "RGBA"
- else:
- msg = "unknown TGA mode"
- raise SyntaxError(msg)
-
- # orientation
- orientation = flags & 0x30
- self._flip_horizontally = orientation in [0x10, 0x30]
- if orientation in [0x20, 0x30]:
- orientation = 1
- elif orientation in [0, 0x10]:
- orientation = -1
- else:
- msg = "unknown TGA orientation"
- raise SyntaxError(msg)
-
- self.info["orientation"] = orientation
-
- if imagetype & 8:
- self.info["compression"] = "tga_rle"
-
- if id_len:
- self.info["id_section"] = self.fp.read(id_len)
-
- if colormaptype:
- # read palette
- start, size, mapdepth = i16(s, 3), i16(s, 5), s[7]
- if mapdepth == 16:
- self.palette = ImagePalette.raw(
- "BGR;15", b"\0" * 2 * start + self.fp.read(2 * size)
- )
- elif mapdepth == 24:
- self.palette = ImagePalette.raw(
- "BGR", b"\0" * 3 * start + self.fp.read(3 * size)
- )
- elif mapdepth == 32:
- self.palette = ImagePalette.raw(
- "BGRA", b"\0" * 4 * start + self.fp.read(4 * size)
- )
-
- # setup tile descriptor
- try:
- rawmode = MODES[(imagetype & 7, depth)]
- if imagetype & 8:
- # compressed
- self.tile = [
- (
- "tga_rle",
- (0, 0) + self.size,
- self.fp.tell(),
- (rawmode, orientation, depth),
- )
- ]
- else:
- self.tile = [
- (
- "raw",
- (0, 0) + self.size,
- self.fp.tell(),
- (rawmode, 0, orientation),
- )
- ]
- except KeyError:
- pass # cannot decode
-
- def load_end(self):
- if self._flip_horizontally:
- self.im = self.im.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
-
-
-#
-# --------------------------------------------------------------------
-# Write TGA file
-
-
-SAVE = {
- "1": ("1", 1, 0, 3),
- "L": ("L", 8, 0, 3),
- "LA": ("LA", 16, 0, 3),
- "P": ("P", 8, 1, 1),
- "RGB": ("BGR", 24, 0, 2),
- "RGBA": ("BGRA", 32, 0, 2),
-}
-
-
-def _save(im, fp, filename):
- try:
- rawmode, bits, colormaptype, imagetype = SAVE[im.mode]
- except KeyError as e:
- msg = f"cannot write mode {im.mode} as TGA"
- raise OSError(msg) from e
-
- if "rle" in im.encoderinfo:
- rle = im.encoderinfo["rle"]
- else:
- compression = im.encoderinfo.get("compression", im.info.get("compression"))
- rle = compression == "tga_rle"
- if rle:
- imagetype += 8
-
- id_section = im.encoderinfo.get("id_section", im.info.get("id_section", ""))
- id_len = len(id_section)
- if id_len > 255:
- id_len = 255
- id_section = id_section[:255]
- warnings.warn("id_section has been trimmed to 255 characters")
-
- if colormaptype:
- palette = im.im.getpalette("RGB", "BGR")
- colormaplength, colormapentry = len(palette) // 3, 24
- else:
- colormaplength, colormapentry = 0, 0
-
- if im.mode in ("LA", "RGBA"):
- flags = 8
- else:
- flags = 0
-
- orientation = im.encoderinfo.get("orientation", im.info.get("orientation", -1))
- if orientation > 0:
- flags = flags | 0x20
-
- fp.write(
- o8(id_len)
- + o8(colormaptype)
- + o8(imagetype)
- + o16(0) # colormapfirst
- + o16(colormaplength)
- + o8(colormapentry)
- + o16(0)
- + o16(0)
- + o16(im.size[0])
- + o16(im.size[1])
- + o8(bits)
- + o8(flags)
- )
-
- if id_section:
- fp.write(id_section)
-
- if colormaptype:
- fp.write(palette)
-
- if rle:
- ImageFile._save(
- im, fp, [("tga_rle", (0, 0) + im.size, 0, (rawmode, orientation))]
- )
- else:
- ImageFile._save(
- im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, orientation))]
- )
-
- # write targa version 2 footer
- fp.write(b"\000" * 8 + b"TRUEVISION-XFILE." + b"\000")
-
-
-#
-# --------------------------------------------------------------------
-# Registry
-
-
-Image.register_open(TgaImageFile.format, TgaImageFile)
-Image.register_save(TgaImageFile.format, _save)
-
-Image.register_extensions(TgaImageFile.format, [".tga", ".icb", ".vda", ".vst"])
-
-Image.register_mime(TgaImageFile.format, "image/x-tga")
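
A small round-trip sketch through Pillow's public API: the `rle` and `orientation` keywords correspond to what `_save` above reads from `encoderinfo`, and the `compression`/`orientation` entries are what `_open` stores in `info` when the file is read back.

```python
from PIL import Image

img = Image.new("RGB", (64, 32), (255, 0, 0))
img.save("solid.tga", rle=True, orientation=1)   # run-length encoded, top-left origin

reloaded = Image.open("solid.tga")
print(reloaded.format, reloaded.mode, reloaded.size)   # TGA RGB (64, 32)
print(reloaded.info.get("compression"))                # 'tga_rle'
print(reloaded.info.get("orientation"))                # 1
```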
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-eee6fbce.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-eee6fbce.js
deleted file mode 100644
index bd7580ef007d2ced7bf38b34495c0b44f5074f00..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-eee6fbce.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as F,e as G,s as H,a9 as K,m as p,t as Y,o as B,g as j,K as k,Y as q,h as S,j as v,p as z,x as D,ab as Q,ac as R,ad as V,w as g,u as b,k as w,F as h,G as A,H as C,V as E,ae as I,Q as J,R as L}from"./index-39fce9e2.js";import{B as M}from"./Button-79f6e3bf.js";import{S as N}from"./StaticColumn-ab6a4f96.js";function O(a){let e,l,t,s,o,r,n,f,d,_;const u=a[3].default,c=K(u,a,a[2],null);return{c(){e=p("div"),l=p("span"),t=Y(a[1]),s=B(),o=p("span"),o.textContent="▼",r=B(),n=p("div"),c&&c.c(),j(l,"class","svelte-s1r2yt"),j(o,"class","icon svelte-s1r2yt"),k(o,"transform",a[0]?"rotate(0)":"rotate(90deg)"),j(e,"class","label-wrap svelte-s1r2yt"),q(e,"open",a[0]),k(n,"display",a[0]?"block":"none")},m(i,m){S(i,e,m),v(e,l),v(l,t),v(e,s),v(e,o),S(i,r,m),S(i,n,m),c&&c.m(n,null),f=!0,d||(_=z(e,"click",a[4]),d=!0)},p(i,[m]){(!f||m&2)&&D(t,i[1]),m&1&&k(o,"transform",i[0]?"rotate(0)":"rotate(90deg)"),(!f||m&1)&&q(e,"open",i[0]),c&&c.p&&(!f||m&4)&&Q(c,u,i,i[2],f?V(u,i[2],m,null):R(i[2]),null),m&1&&k(n,"display",i[0]?"block":"none")},i(i){f||(g(c,i),f=!0)},o(i){b(c,i),f=!1},d(i){i&&(w(e),w(r),w(n)),c&&c.d(i),d=!1,_()}}}function P(a,e,l){let{$$slots:t={},$$scope:s}=e,{label:o=""}=e,{open:r=!0}=e;const n=()=>l(0,r=!r);return a.$$set=f=>{"label"in f&&l(1,o=f.label),"open"in f&&l(0,r=f.open),"$$scope"in f&&l(2,s=f.$$scope)},[r,o,s,t,n]}class T extends F{constructor(e){super(),G(this,e,P,O,H,{label:1,open:0})}}function U(a){let e;const l=a[6].default,t=K(l,a,a[7],null);return{c(){t&&t.c()},m(s,o){t&&t.m(s,o),e=!0},p(s,o){t&&t.p&&(!e||o&128)&&Q(t,l,s,s[7],e?V(l,s[7],o,null):R(s[7]),null)},i(s){e||(g(t,s),e=!0)},o(s){b(t,s),e=!1},d(s){t&&t.d(s)}}}function W(a){let e,l;return e=new N({props:{$$slots:{default:[U]},$$scope:{ctx:a}}}),{c(){h(e.$$.fragment)},m(t,s){A(e,t,s),l=!0},p(t,s){const o={};s&128&&(o.$$scope={dirty:s,ctx:t}),e.$set(o)},i(t){l||(g(e.$$.fragment,t),l=!0)},o(t){b(e.$$.fragment,t),l=!1},d(t){C(e,t)}}}function X(a){let e,l,t,s;const o=[a[5]];let r={};for(let n=0;n{"label"in u&&l(0,o=u.label),"elem_id"in u&&l(1,r=u.elem_id),"elem_classes"in u&&l(2,n=u.elem_classes),"visible"in u&&l(3,f=u.visible),"open"in u&&l(4,d=u.open),"loading_status"in u&&l(5,_=u.loading_status),"$$scope"in u&&l(7,s=u.$$scope)},[o,r,n,f,d,_,t,s]}class y extends F{constructor(e){super(),G(this,e,$,Z,H,{label:0,elem_id:1,elem_classes:2,visible:3,open:4,loading_status:5})}}const se=y,le=["static"];export{se as Component,le as modes};
-//# sourceMappingURL=index-eee6fbce.js.map
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py
deleted file mode 100644
index 1cbe78f0c964773ca64603b07bd5fda3d1e1ea19..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py
+++ /dev/null
@@ -1,677 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import Any, Callable, Dict, List, Optional, Union
-
-import numpy as np
-import torch
-from transformers import CLIPTextModel, CLIPTokenizer
-
-from ...loaders import TextualInversionLoaderMixin
-from ...models import AutoencoderKL, UNet3DConditionModel
-from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import (
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-from ..pipeline_utils import DiffusionPipeline
-from . import TextToVideoSDPipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import torch
- >>> from diffusers import TextToVideoSDPipeline
- >>> from diffusers.utils import export_to_video
-
- >>> pipe = TextToVideoSDPipeline.from_pretrained(
- ... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
- ... )
- >>> pipe.enable_model_cpu_offload()
-
- >>> prompt = "Spiderman is surfing"
- >>> video_frames = pipe(prompt).frames
- >>> video_path = export_to_video(video_frames)
- >>> video_path
- ```
-"""
-
-
-def tensor2vid(video: torch.Tensor, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -> List[np.ndarray]:
- # This code is copied from https://github.com/modelscope/modelscope/blob/1509fdb973e5871f37148a4b5e5964cafd43e64d/modelscope/pipelines/multi_modal/text_to_video_synthesis_pipeline.py#L78
- # reshape to ncfhw
- mean = torch.tensor(mean, device=video.device).reshape(1, -1, 1, 1, 1)
- std = torch.tensor(std, device=video.device).reshape(1, -1, 1, 1, 1)
- # unnormalize back to [0,1]
- video = video.mul_(std).add_(mean)
- video.clamp_(0, 1)
- # prepare the final outputs
- i, c, f, h, w = video.shape
- images = video.permute(2, 3, 0, 4, 1).reshape(
- f, h, i * w, c
- ) # 1st (frames, h, batch_size, w, c) 2nd (frames, h, batch_size * w, c)
-    images = images.unbind(dim=0)  # prepare a list of individual (consecutive) frames
- images = [(image.cpu().numpy() * 255).astype("uint8") for image in images] # f h w c
- return images
-
-
-class TextToVideoSDPipeline(DiffusionPipeline, TextualInversionLoaderMixin):
- r"""
- Pipeline for text-to-video generation.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Same as Stable Diffusion 2.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet3DConditionModel`]): Conditional U-Net architecture to denoise the encoded video latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet3DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding.
-
- When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
- steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
- def enable_vae_tiling(self):
- r"""
- Enable tiled VAE decoding.
-
- When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in
- several steps. This is useful to save a large amount of memory and to allow the processing of larger images.
- """
- self.vae.enable_tiling()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
- def disable_vae_tiling(self):
- r"""
- Disable tiled VAE decoding. If `enable_vae_tiling` was previously invoked, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_tiling()
-
- def enable_sequential_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
- text_encoder, vae have their state dicts saved to CPU and then are moved to a `torch.device('meta') and loaded
- to GPU only when their specific submodule has its `forward` method called. Note that offloading happens on a
- submodule basis. Memory savings are higher than with `enable_model_cpu_offload`, but performance is lower.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"):
- from accelerate import cpu_offload
- else:
- raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]:
- cpu_offload(cpu_offloaded_model, device)
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
- method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- hook = None
- for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- @property
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
- def _execution_device(self):
- r"""
- Returns the device on which the pipeline's models will be executed. After calling
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
- hooks.
- """
- if not hasattr(self.unet, "_hf_hook"):
- return self.device
- for module in self.unet.modules():
- if (
- hasattr(module, "_hf_hook")
- and hasattr(module._hf_hook, "execution_device")
- and module._hf_hook.execution_device is not None
- ):
- return torch.device(module._hf_hook.execution_device)
- return self.device
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
- def _encode_prompt(
- self,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt=None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- """
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if prompt_embeds is None:
-            # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, self.tokenizer)
-
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = text_inputs.attention_mask.to(device)
- else:
- attention_mask = None
-
- prompt_embeds = self.text_encoder(
- text_input_ids.to(device),
- attention_mask=attention_mask,
- )
- prompt_embeds = prompt_embeds[0]
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance and negative_prompt_embeds is None:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
-            # textual inversion: process multi-vector tokens if necessary
- if isinstance(self, TextualInversionLoaderMixin):
- uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)
-
- max_length = prompt_embeds.shape[1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
- attention_mask = uncond_input.attention_mask.to(device)
- else:
- attention_mask = None
-
- negative_prompt_embeds = self.text_encoder(
- uncond_input.input_ids.to(device),
- attention_mask=attention_mask,
- )
- negative_prompt_embeds = negative_prompt_embeds[0]
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
-
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)
-
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
- return prompt_embeds
-
- def decode_latents(self, latents):
- latents = 1 / self.vae.config.scaling_factor * latents
-
- batch_size, channels, num_frames, height, width = latents.shape
- latents = latents.permute(0, 2, 1, 3, 4).reshape(batch_size * num_frames, channels, height, width)
-
- image = self.vae.decode(latents).sample
- video = (
- image[None, :]
- .reshape(
- (
- batch_size,
- num_frames,
- -1,
- )
- + image.shape[2:]
- )
- .permute(0, 2, 1, 3, 4)
- )
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- video = video.float()
- return video
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs
- def check_inputs(
- self,
- prompt,
- height,
- width,
- callback_steps,
- negative_prompt=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- def prepare_latents(
- self, batch_size, num_channels_latents, num_frames, height, width, dtype, device, generator, latents=None
- ):
- shape = (
- batch_size,
- num_channels_latents,
- num_frames,
- height // self.vae_scale_factor,
- width // self.vae_scale_factor,
- )
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_frames: int = 16,
- num_inference_steps: int = 50,
- guidance_scale: float = 9.0,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "np",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`.
-                The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`
-                instead.
- The height in pixels of the generated video.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated video.
- num_frames (`int`, *optional*, defaults to 16):
-                The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per
-                second amounts to 2 seconds of video.
- num_inference_steps (`int`, *optional*, defaults to 50):
-                The number of denoising steps. More denoising steps usually lead to higher quality videos at the
- expense of slower inference.
-            guidance_scale (`float`, *optional*, defaults to 9.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate videos that are closely linked to the
-                text `prompt`, usually at the expense of lower video quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the video generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`. Latents should be of shape
- `(batch_size, num_channel, num_frames, height, width)`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- output_type (`str`, *optional*, defaults to `"np"`):
-                The output format of the generated video. Choose between `torch.FloatTensor` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.TextToVideoSDPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
- `self.processor` in
- [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion.TextToVideoSDPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.TextToVideoSDPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated frames.
- """
- # 0. Default height and width to unet
- height = height or self.unet.config.sample_size * self.vae_scale_factor
- width = width or self.unet.config.sample_size * self.vae_scale_factor
-
- num_images_per_prompt = 1
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
-        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- prompt_embeds = self._encode_prompt(
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- )
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- num_frames,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- )
-
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 7. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- ).sample
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
-                # reshape latents: fold the frame axis into the batch axis so the scheduler can step on 4D image-shaped tensors
- bsz, channel, frames, width, height = latents.shape
- latents = latents.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height)
- noise_pred = noise_pred.permute(0, 2, 1, 3, 4).reshape(bsz * frames, channel, width, height)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # reshape latents back
- latents = latents[None, :].reshape(bsz, frames, channel, width, height).permute(0, 2, 1, 3, 4)
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- video_tensor = self.decode_latents(latents)
-
- if output_type == "pt":
- video = video_tensor
- else:
- video = tensor2vid(video_tensor)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (video,)
-
- return TextToVideoSDPipelineOutput(frames=video)
diff --git a/spaces/deepliteai/yolobench/plotting.py b/spaces/deepliteai/yolobench/plotting.py
deleted file mode 100644
index 2f2f12548bbeccf75ac218fb9fcba0ad3b6c1468..0000000000000000000000000000000000000000
--- a/spaces/deepliteai/yolobench/plotting.py
+++ /dev/null
@@ -1,205 +0,0 @@
-import plotly.express as px
-import plotly.graph_objects as go
-
-from utils import DEEPLITE_LIGHT_BLUE_HEX, load_yolobench_data
-
-
-df, pareto_indices = load_yolobench_data()
-
-
-METRIC_NAME_MAPPING = {
- 'mAP@0.5': 'mAP_0.5',
- 'mAP@0.5:0.95': 'mAP_0.5:0.95',
- 'Precision': 'precision',
- 'Recall': 'recall',
-}
-
-METRIC_KEYS_TO_NAMES = {v: k for k, v in METRIC_NAME_MAPPING.items()}
-
-
-LATENCY_KEYS = {
- 'Raspberry Pi 4 Model B (CPU, TFLite, FP32)': 'raspi4_tflite_latency',
- 'Jetson Nano (GPU, ONNX Runtime, FP32)': 'nano_gpu_latency',
- 'Intel® Core™i7-10875H (CPU, OpenVINO, FP32)': 'openvino_latency',
- 'Khadas VIM3 (NPU, INT16)': 'vim3_latency',
- 'Orange Pi 5 (NPU, FP16)': 'orange_pi_latency',
-}
-
-LATENCY_KEYS_TO_NAMES = {v: k for k, v in LATENCY_KEYS.items()}
-
-DATASET_TAGS = {
- 'PASCAL VOC': 'voc',
- 'SKU-110K': 'sku',
- 'WIDERFACE': 'wider',
- 'COCO': 'coco',
-}
-
-DATASET_TAGS_TO_NAMES = {v: k for k, v in DATASET_TAGS.items()}
-
-
-def get_scatter_plot(
- dataset_tag,
- metric_tag,
- latency_key,
- model_family_coloring=True,
- add_pareto_frontier=False,
- plot_pareto_only=False,
- log_axis=False,
- ):
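-    # Build a Plotly scatter of the chosen accuracy metric vs. latency for the selected dataset and hardware,
-    # optionally coloured by model family, restricted to (or overlaid with) the Pareto frontier, with a log x-axis.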
- fig_opts, layout_opts = {'opacity': 0.5, 'color_discrete_sequence': [DEEPLITE_LIGHT_BLUE_HEX]}, {}
- if model_family_coloring:
- fig_opts = {
- 'color': 'model_family',
- 'opacity': 0.75,
- 'color_discrete_sequence': px.colors.qualitative.Plotly,
- }
- layout_opts = {
- 'legend': dict(
- title='Model family (click to toggle)',
- )
- }
-
- frontier = None
- if plot_pareto_only:
- metric_key = f'{metric_tag}_{dataset_tag}'
- frontier = pareto_indices[metric_key][latency_key]
-
- fig = px.scatter(
- df if frontier is None else df.iloc[frontier, :],
- x=latency_key,
- y=f'{metric_tag}_{dataset_tag}',
- title=f'{METRIC_KEYS_TO_NAMES[metric_tag]}-latency scatter plot',
- hover_data={
- 'model_name': True,
- 'model_family': False,
- latency_key: ':.2f',
- f'{metric_tag}_{dataset_tag}': ':.2f',
- },
- labels={
- 'model_name': 'Model name',
- latency_key: 'Latency',
- f'{metric_tag}_{dataset_tag}': METRIC_KEYS_TO_NAMES[metric_tag],
- },
- template='plotly_white',
- **fig_opts,
- )
- if log_axis:
- fig.update_xaxes(type='log')
-
- fig.update_layout(
- height=600,
- modebar_remove=['lasso', 'autoscale', 'zoomin', 'zoomout', 'select2d', 'select'],
- xaxis_title=f'{LATENCY_KEYS_TO_NAMES[latency_key]} latency, ms',
- yaxis_title=f"{METRIC_KEYS_TO_NAMES[metric_tag]}",
- xaxis=dict(
- rangeslider=dict(
- visible=True,
- bgcolor=DEEPLITE_LIGHT_BLUE_HEX,
- thickness=0.02,
- ),
- ),
- yaxis=dict(
- fixedrange=False,
- ),
- hoverlabel=dict(
- # bgcolor="white",
- font_size=14,
- font_family='Source Sans Pro'
- ),
- **layout_opts,
- )
- if add_pareto_frontier:
- fig = pareto_frontier_layer(fig, dataset_tag, metric_tag, latency_key)
- return fig
-
-
-def create_yolobench_plots(
- dataset_name,
- hardware_name,
- metric_name,
- vis_options,
- table_mode,
- ):
- model_family_coloring = 'Model family' in vis_options
- add_pareto_frontier = 'Highlight Pareto' in vis_options
- plot_pareto_only = 'Show Pareto only' in vis_options
- log_axis = 'Log x-axis' in vis_options
- fig = get_scatter_plot(
- DATASET_TAGS[dataset_name],
- METRIC_NAME_MAPPING[metric_name],
- LATENCY_KEYS[hardware_name],
- model_family_coloring,
- add_pareto_frontier,
- plot_pareto_only,
- log_axis,
- )
- pareto_table = get_pareto_table(
- dataset_name, hardware_name, metric_name, expand_table='Show all' in table_mode
- )
- return fig, pareto_table
-
-
-def pareto_frontier_layer(
- fig,
- dataset_tag,
- metric_tag,
- latency_key,
- ):
- metric_key = f'{metric_tag}_{dataset_tag}'
- frontier = pareto_indices[metric_key][latency_key]
- fig.add_trace(
- go.Scatter(
- x=df.iloc[frontier, :][latency_key],
- y=df.iloc[frontier, :][metric_key],
- mode='lines',
- opacity=0.5,
- line=go.scatter.Line(color='grey'),
- showlegend=False,
- name=metric_key,
- )
- )
- return fig
-
-
-def get_pareto_table(
- dataset_name, hardware_name, metric_name, expand_table=False,
-):
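-    # Assemble an HTML table of Pareto-optimal models (name, input resolution, metric value, latency, download link)
-    # for the selected dataset/hardware/metric, truncated to the top 10 rows unless expand_table is set.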
- dataset_tag = DATASET_TAGS[dataset_name]
- metric_tag = METRIC_NAME_MAPPING[metric_name]
- latency_key = LATENCY_KEYS[hardware_name]
- metric_key = f'{metric_tag}_{dataset_tag}'
-
- latency_key_final = f'{LATENCY_KEYS_TO_NAMES[latency_key]} latency, ms'
- metric_key_final = METRIC_KEYS_TO_NAMES[metric_tag]
-
- frontier = pareto_indices[metric_key][latency_key]
- table_df = df.iloc[frontier, :][['model_name', metric_key, latency_key]]
- table_df['Input resolution (px)'] = table_df['model_name'].apply(lambda name: name.split('_')[-1])
- table_df['Model name'] = table_df['model_name'].apply(lambda name: name.split('_')[0])
- table_df[metric_key_final] = table_df[metric_key].apply(lambda val: round(val, 3))
- table_df[latency_key_final] = table_df[latency_key].apply(lambda val: round(val, 2))
-
-    def make_clickable(url, name):
-        # Render the display name as an HTML anchor pointing at the given URL (markup assumed).
-        return f'<a href="{url}">{name}</a>'
-
-
- if dataset_name == 'COCO':
- table_df['Download link'] = table_df['model_name'].apply(
- lambda name: f'https://download.deeplite.ai/zoo/models/YOLOBench/{name.split("_")[0]}_640.pt'
- )
- table_df['Download link'] = table_df.apply(lambda x: make_clickable(x['Download link'], 'Weights download'), axis=1)
- else:
- table_df['Download link'] = table_df['model_name'].apply(lambda s: 'Coming soon')
-
-
- table_df = table_df[['Model name', 'Input resolution (px)',
- metric_key_final, latency_key_final, 'Download link']].sort_values(by=metric_key_final, ascending=False)
- if not expand_table:
- table_df = table_df.iloc[:10, :]
-
- table_df = table_df.to_html(
- classes='table',
- escape=False, render_links=True, index=False
- )
-
- return table_df
diff --git a/spaces/diacanFperku/AutoGPT/Answer Key Section 1 Reinforcement Cell Division And Mitosis.zip.md b/spaces/diacanFperku/AutoGPT/Answer Key Section 1 Reinforcement Cell Division And Mitosis.zip.md
deleted file mode 100644
index 774089643abe4cf6a930f87a002cc6c6a17eb3fc..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Answer Key Section 1 Reinforcement Cell Division And Mitosis.zip.md
+++ /dev/null
@@ -1,6 +0,0 @@
-answer key section 1 reinforcement cell division and mitosis.zip DOWNLOAD ››› https://gohhs.com/2uFTtq
-
-6-10 Bi-Orientation of Sister Chromatids syntelic monotelic merotelic (1) merotelic (2) ... The strength of attachment is governed in part by kinetochore tension. ... When incorrect attachments occur, they are corrected by a form of positive reinforcement. ... The answer probably lies in the fact that only bi-orientation results in the ... 1fdad05405
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen.md b/spaces/diacanFperku/AutoGPT/Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen.md
deleted file mode 100644
index 372984d4d7fce4cfb6c78b371348f3b16ad2067f..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen: A Review
-
-If you are looking for a software that can help you make better decisions in business, personal or professional situations, you might want to check out Infonautic Decision Making Helper V1.20. This is a powerful tool that can analyze different options and criteria, and provide you with a clear and logical recommendation based on your preferences and goals.
-Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen Download Zip ->->->-> https://gohhs.com/2uFV2E
-
-However, if you don't want to pay for the full version of this software, you might be tempted to download Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen from the internet. This is a cracked version of the software that claims to bypass the activation process and give you access to all the features for free.
-
-But is it really worth it? In this article, we will review Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen and tell you why you should avoid it at all costs.
-
-What is Infonautic Decision Making Helper V1.20?
-
-Infonautic Decision Making Helper V1.20 is a software that can help you make better decisions by using a structured and rational approach. It can help you define your problem, identify your options and criteria, weigh them according to your preferences, and calculate the best alternative for you.
-
-The software can also generate reports and charts that can help you visualize and communicate your decision process and results. You can use it for various types of decisions, such as personal, professional, business, financial, ethical, or strategic.
-
-
-Infonautic Decision Making Helper V1.20 is compatible with Windows XP, Vista, 7, 8, and 10. It has a user-friendly interface and a comprehensive help system that can guide you through the steps of decision making.
-
-What is Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen?
-
-Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen is a cracked version of the software that claims to give you access to the full features without paying for the license. It is usually distributed through torrent sites or file-sharing platforms.
-
-The crack is supposed to bypass the activation process of the software by generating a fake serial key or patching the executable file. However, this is not only illegal but also risky for your computer and your data.
-
-Why should you avoid Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen?
-
-There are many reasons why you should avoid downloading and using Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen. Here are some of them:
-
-
-It is illegal. Downloading and using cracked software is a violation of the intellectual property rights of the software developer. You could face legal consequences if you are caught using pirated software.
-It is unethical. By using cracked software, you are depriving the software developer of their rightful income and recognition for their work. You are also undermining their efforts to improve their product and provide customer support.
-It is unsafe. Cracked software often contains malware, viruses, spyware, or other malicious code that can harm your computer and compromise your data. You could lose your files, expose your personal information, or even become a victim of identity theft or fraud.
-It is unreliable. Cracked software often has bugs, errors, or compatibility issues that can affect its performance and functionality. You could experience crashes, freezes, glitches, or corrupted files while using it. You could also miss out on updates, patches, or new features that the original software offers.
-It is ineffective. Cracked software may not work as intended or as advertised by the software developer. You could end up with inaccurate or incomplete results that could affect your decision making quality and outcomes.
-
-
-What should you do instead?
-
-If you want to use Infonautic Decision Making Helper V1.20 for your decision making needs, you should buy the original software from the official website of Infonautic (https://www.infonautics-software.ch/decisionmakinghelper/). You can choose from different license options depending on your needs and budget.
-
-By buying the original software, you will get:
-
-
-A legal and ethical product. You will respect the intellectual property rights of the software developer and support their work.
-A safe and secure product. You will avoid malware, viruses, spyware, or other malicious code that could harm your computer and data.
-A reliable and updated product. You will enjoy a bug-free, error-free, and compatible product that works as intended and as advertised. You will also get access to updates, patches, or new features that the software developer releases.
-An effective and accurate product. You will get high-quality results that can help you make better decisions in various situations.
-
-
-In conclusion, Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen is not worth downloading or using. It is illegal, unethical, unsafe, unreliable, and ineffective. You should avoid it at all costs and buy the original software instead.
-
-
-
-How much does Infonautic Decision Making Helper V1.20 cost and what are the license options?
-
-Infonautic Decision Making Helper V1.20 is a shareware software, which means you can test it for free before buying a license key. The trial version has some limitations, such as the number of options, criteria, and ratings you can enter, and the duration of the trial period.
-
-If you want to use the full version of the software, you need to buy a license key from the official website of Infonautic (https://www.infonautics-software.ch/decisionmakinghelper/). You can choose from different license options depending on your needs and budget:
-
-
-Single User License: This license allows you to use the software on one computer for one user. The price is $25 USD.
-Multi User License: This license allows you to use the software on multiple computers for multiple users within one company or organization. The price depends on the number of users and ranges from $50 USD for 2 users to $500 USD for 50 users.
-Site License: This license allows you to use the software on unlimited computers for unlimited users within one company or organization at one location. The price is $1000 USD.
-
-
-All license options include free updates and support by email in English and German languages. The license key is valid for all future versions of the software.
-
-Conclusion
-
-Infonautic Decision Making Helper V1.20 is a software that can help you make better decisions by using a structured and rational approach. It has many features and benefits that can help you define your problem, identify your options and criteria, weigh them according to your preferences, and calculate the best alternative for you. It can also generate reports and charts that can help you visualize and communicate your decision process and results.
-
-However, if you want to use this software, you should buy the original version from the official website of Infonautic and avoid downloading Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen from the internet. This is a cracked version of the software that is illegal, unethical, unsafe, unreliable, and ineffective.
-
-By buying the original software, you will get a legal, ethical, safe, reliable, and effective product that can help you make better decisions in various situations. You will also get access to updates, patches, or new features that Infonautic releases. You can choose from different license options depending on your needs and budget.
-
-We hope this article has helped you understand more about Infonautic Decision Making Helper V1.20 and why you should avoid Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen at all costs.
-Are there any alternatives or competitors to Infonautic Decision Making Helper V1.20?
-
-Infonautic Decision Making Helper V1.20 is not the only software that can help you make better decisions. There are many other tools and methods that you can use to improve your decision making skills and outcomes. Here are some of them:
-
-
-SWOT Analysis: This is a technique that can help you analyze the strengths, weaknesses, opportunities, and threats of a situation or an option. You can use it to evaluate the pros and cons of different alternatives and choose the best one for your goals.
-Decision Matrix Analysis: This is a technique that can help you compare different options based on multiple criteria and assign them scores. You can use it to rank the options and select the one with the highest score.
-Pros and Cons List: This is a simple technique that can help you list the advantages and disadvantages of each option. You can use it to weigh the benefits and costs of different alternatives and choose the one that has the most positive impact.
-Creately: This is a software that can help you create diagrams and charts for your decision making process. You can use it to visualize your options, criteria, ratings, and results in different formats, such as mind maps, flowcharts, or matrices.
-Lucidchart: This is another software that can help you create diagrams and charts for your decision making process. You can use it to draw your options, criteria, ratings, and results in different formats, such as Venn diagrams, spider diagrams, or pie charts.
-
-
-These are some of the alternatives or competitors to Infonautic Decision Making Helper V1.20 that you can use to make better decisions. However, none of them can offer you the same features and benefits as Infonautic Decision Making Helper V1.20, such as the ability to handle up to 300 criteria, 26 options, and 1000 ratings, generate reports and charts, understand and communicate your decision, and run on Windows XP and newer.
-
-Therefore, if you want to use a software that can help you make better decisions in a structured and rational way, you should buy Infonautic Decision Making Helper V1.20 from the official website of Infonautic and avoid downloading Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen from the internet.
-Conclusion
-
-In this article, we have reviewed Infonautic Decision Making Helper V1.20 and Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen. We have explained what they are, how they work, why you should avoid the cracked version, and what are the features and benefits of the original version. We have also discussed some of the alternatives and competitors to Infonautic Decision Making Helper V1.20 that you can use to make better decisions.
-
-We hope this article has helped you understand more about Infonautic Decision Making Helper V1.20 and why you should avoid Infonautic Decision Making Helper V1.20 Incl Crack-LAXiTY [TorDi Serial Key Keygen at all costs.
-
-If you want to use a software that can help you make better decisions in a structured and rational way, you should buy Infonautic Decision Making Helper V1.20 from the official website of Infonautic and enjoy its features and benefits.
-
-Thank you for reading this article and happy decision making!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/utils/parser.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/utils/parser.py
deleted file mode 100644
index 63cbf1e1f4702453ffed5561e9b2621846c13c8f..0000000000000000000000000000000000000000
--- a/spaces/diagaiwei/ir_chinese_medqa/colbert/utils/parser.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import os
-import copy
-import faiss
-
-from argparse import ArgumentParser
-
-import colbert.utils.distributed as distributed
-from colbert.utils.runs import Run
-from colbert.utils.utils import print_message, timestamp, create_directory
-
-
-class Arguments():
- def __init__(self, description):
- self.parser = ArgumentParser(description=description)
- self.checks = []
-
- self.add_argument('--root', dest='root', default='experiments')
- self.add_argument('--experiment', dest='experiment', default='dirty')
- self.add_argument('--run', dest='run', default=Run.name)
-
- self.add_argument('--local_rank', dest='rank', default=-1, type=int)
-
- def add_model_parameters(self):
- # Core Arguments
- self.add_argument('--similarity', dest='similarity', default='cosine', choices=['cosine', 'l2'])
- self.add_argument('--dim', dest='dim', default=128, type=int)
- self.add_argument('--query_maxlen', dest='query_maxlen', default=32, type=int)
- self.add_argument('--doc_maxlen', dest='doc_maxlen', default=180, type=int)
-
- # Filtering-related Arguments
- self.add_argument('--mask-punctuation', dest='mask_punctuation', default=False, action='store_true')
-
- def add_model_training_parameters(self):
- # NOTE: Providing a checkpoint is one thing, --resume is another, --resume_optimizer is yet another.
- self.add_argument('--resume', dest='resume', default=False, action='store_true')
- self.add_argument('--resume_optimizer', dest='resume_optimizer', default=False, action='store_true')
- self.add_argument('--checkpoint', dest='checkpoint', default=None, required=False)
-
- self.add_argument('--lr', dest='lr', default=3e-06, type=float)
- self.add_argument('--maxsteps', dest='maxsteps', default=400000, type=int)
- self.add_argument('--bsize', dest='bsize', default=32, type=int)
- self.add_argument('--accum', dest='accumsteps', default=2, type=int)
- self.add_argument('--amp', dest='amp', default=False, action='store_true')
-
- def add_model_inference_parameters(self):
- self.add_argument('--checkpoint', dest='checkpoint', required=True)
- self.add_argument('--bsize', dest='bsize', default=128, type=int)
- self.add_argument('--amp', dest='amp', default=False, action='store_true')
-
- def add_training_input(self):
- self.add_argument('--triples', dest='triples', required=True)
- self.add_argument('--queries', dest='queries', default=None)
- self.add_argument('--collection', dest='collection', default=None)
-
- def check_training_input(args):
- assert (args.collection is None) == (args.queries is None), \
-                "For training, both (or neither) --collection and --queries must be supplied. " \
- "If neither is supplied, the --triples file must contain texts (not PIDs)."
-
- self.checks.append(check_training_input)
-
- def add_ranking_input(self):
- self.add_argument('--queries', dest='queries', default=None)
- self.add_argument('--collection', dest='collection', default=None)
- self.add_argument('--qrels', dest='qrels', default=None)
-
- def add_reranking_input(self):
- self.add_ranking_input()
- self.add_argument('--topk', dest='topK', required=True)
- self.add_argument('--shortcircuit', dest='shortcircuit', default=False, action='store_true')
-
- def add_indexing_input(self):
- self.add_argument('--collection', dest='collection', required=True)
- self.add_argument('--index_root', dest='index_root', required=True)
- self.add_argument('--index_name', dest='index_name', required=True)
-
- def add_compressed_index_input(self):
- self.add_argument('--compression_level', dest='compression_level',
- choices=[1, 2], type=int, default=None)
-
-
- def add_index_use_input(self):
- self.add_argument('--index_root', dest='index_root', required=True)
- self.add_argument('--index_name', dest='index_name', required=True)
- self.add_argument('--partitions', dest='partitions', default=None, type=int, required=False)
-
- def add_retrieval_input(self):
- self.add_index_use_input()
- self.add_argument('--nprobe', dest='nprobe', default=10, type=int)
- self.add_argument('--retrieve_only', dest='retrieve_only', default=False, action='store_true')
-
- def add_argument(self, *args, **kw_args):
- return self.parser.add_argument(*args, **kw_args)
-
- def check_arguments(self, args):
- for check in self.checks:
- check(args)
-
- def parse(self):
- args = self.parser.parse_args()
- self.check_arguments(args)
-
- args.input_arguments = copy.deepcopy(args)
-
- args.nranks, args.distributed = distributed.init(args.rank)
-
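-        # Reserve roughly 80% of the available CPU threads for FAISS and split them evenly across ranks.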
- args.nthreads = int(max(os.cpu_count(), faiss.omp_get_max_threads()) * 0.8)
- args.nthreads = max(1, args.nthreads // args.nranks)
-
- if args.nranks > 1:
- print_message(f"#> Restricting number of threads for FAISS to {args.nthreads} per process",
- condition=(args.rank == 0))
- faiss.omp_set_num_threads(args.nthreads)
-
- Run.init(args.rank, args.root, args.experiment, args.run)
- Run._log_args(args)
- Run.info(args.input_arguments.__dict__, '\n')
-
- return args
diff --git a/spaces/diffusers/controlnet-canny-tool/__init__.py b/spaces/diffusers/controlnet-canny-tool/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/dilums/sentence-similarity/Dockerfile b/spaces/dilums/sentence-similarity/Dockerfile
deleted file mode 100644
index 91319be9b3dd35d916d18fba5260f51125c46b50..0000000000000000000000000000000000000000
--- a/spaces/dilums/sentence-similarity/Dockerfile
+++ /dev/null
@@ -1,65 +0,0 @@
-FROM node:18-alpine AS base
-
-# Install dependencies only when needed
-FROM base AS deps
-# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
-RUN apk add --no-cache libc6-compat
-WORKDIR /app
-
-# Install dependencies based on the preferred package manager
-COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
-RUN \
- if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
- elif [ -f package-lock.json ]; then npm ci; \
- elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
- else echo "Lockfile not found." && exit 1; \
- fi
-
-# Uncomment the following lines if you want to use a secret at buildtime,
-# for example to access your private npm packages
-# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \
-# $(cat /run/secrets/HF_EXAMPLE_SECRET)
-
-# Rebuild the source code only when needed
-FROM base AS builder
-WORKDIR /app
-COPY --from=deps /app/node_modules ./node_modules
-COPY . .
-
-# Next.js collects completely anonymous telemetry data about general usage.
-# Learn more here: https://nextjs.org/telemetry
-# Uncomment the following line in case you want to disable telemetry during the build.
-# ENV NEXT_TELEMETRY_DISABLED 1
-
-# RUN yarn build
-
-# If you use yarn, comment out this line and use the line above
-RUN npm run build
-
-# Production image, copy all the files and run next
-FROM base AS runner
-WORKDIR /app
-
-ENV NODE_ENV production
-# Uncomment the following line in case you want to disable telemetry during runtime.
-# ENV NEXT_TELEMETRY_DISABLED 1
-
-RUN addgroup --system --gid 1001 nodejs
-RUN adduser --system --uid 1001 nextjs
-
-COPY --from=builder /app/public ./public
-
-# Automatically leverage output traces to reduce image size
-# https://nextjs.org/docs/advanced-features/output-file-tracing
-COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
-COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
-COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache
-# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache
-
-USER nextjs
-
-EXPOSE 3000
-
-ENV PORT 3000
-
-CMD ["node", "server.js"]
\ No newline at end of file
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/master.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/master.py
deleted file mode 100644
index 39eaef248e132f7ccd6675b63ba21ef41e350c3b..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/master.py
+++ /dev/null
@@ -1,61 +0,0 @@
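-# Model config for the MASTER text recognizer: a ResNet backbone with global-context attention (GCA) plugins
-# and a Master transformer decoder; labels use the 90-character DICT90 attention convertor.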
-label_convertor = dict(
- type='AttnConvertor', dict_type='DICT90', with_unknown=True)
-
-model = dict(
- type='MASTER',
- backbone=dict(
- type='ResNet',
- in_channels=3,
- stem_channels=[64, 128],
- block_cfgs=dict(
- type='BasicBlock',
- plugins=dict(
- cfg=dict(
- type='GCAModule',
- ratio=0.0625,
- n_head=1,
- pooling_type='att',
- is_att_scale=False,
- fusion_type='channel_add'),
- position='after_conv2')),
- arch_layers=[1, 2, 5, 3],
- arch_channels=[256, 256, 512, 512],
- strides=[1, 1, 1, 1],
- plugins=[
- dict(
- cfg=dict(type='Maxpool2d', kernel_size=2, stride=(2, 2)),
- stages=(True, True, False, False),
- position='before_stage'),
- dict(
- cfg=dict(type='Maxpool2d', kernel_size=(2, 1), stride=(2, 1)),
- stages=(False, False, True, False),
- position='before_stage'),
- dict(
- cfg=dict(
- type='ConvModule',
- kernel_size=3,
- stride=1,
- padding=1,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU')),
- stages=(True, True, True, True),
- position='after_stage')
- ],
- init_cfg=[
- dict(type='Kaiming', layer='Conv2d'),
- dict(type='Constant', val=1, layer='BatchNorm2d'),
- ]),
- encoder=None,
- decoder=dict(
- type='MasterDecoder',
- d_model=512,
- n_head=8,
- attn_drop=0.,
- ffn_drop=0.,
- d_inner=2048,
- n_layers=3,
- feat_pe_drop=0.2,
- feat_size=6 * 40),
- loss=dict(type='TFLoss', reduction='mean'),
- label_convertor=label_convertor,
- max_seq_len=30)
diff --git a/spaces/dirge/voicevox/voicevox_engine/part_of_speech_data.py b/spaces/dirge/voicevox/voicevox_engine/part_of_speech_data.py
deleted file mode 100644
index 8950e47c8b1cc50f7cdd3f67c857be8baf59c321..0000000000000000000000000000000000000000
--- a/spaces/dirge/voicevox/voicevox_engine/part_of_speech_data.py
+++ /dev/null
@@ -1,144 +0,0 @@
-from typing import Dict
-
-from .model import (
- USER_DICT_MAX_PRIORITY,
- USER_DICT_MIN_PRIORITY,
- PartOfSpeechDetail,
- WordTypes,
-)
-
-MIN_PRIORITY = USER_DICT_MIN_PRIORITY
-MAX_PRIORITY = USER_DICT_MAX_PRIORITY
-
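-# Preset part-of-speech details for each user-dictionary word type. `cost_candidates` lists candidate word costs
-# spanning the user-dictionary priority range (USER_DICT_MIN_PRIORITY to USER_DICT_MAX_PRIORITY); a lower cost
-# makes the word more likely to be selected during analysis.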
-part_of_speech_data: Dict[WordTypes, PartOfSpeechDetail] = {
- WordTypes.PROPER_NOUN: PartOfSpeechDetail(
- part_of_speech="名詞",
- part_of_speech_detail_1="固有名詞",
- part_of_speech_detail_2="一般",
- part_of_speech_detail_3="*",
- context_id=1348,
- cost_candidates=[
- -988,
- 3488,
- 4768,
- 6048,
- 7328,
- 8609,
- 8734,
- 8859,
- 8984,
- 9110,
- 14176,
- ],
- accent_associative_rules=[
- "*",
- "C1",
- "C2",
- "C3",
- "C4",
- "C5",
- ],
- ),
- WordTypes.COMMON_NOUN: PartOfSpeechDetail(
- part_of_speech="名詞",
- part_of_speech_detail_1="一般",
- part_of_speech_detail_2="*",
- part_of_speech_detail_3="*",
- context_id=1345,
- cost_candidates=[
- -4445,
- 49,
- 1473,
- 2897,
- 4321,
- 5746,
- 6554,
- 7362,
- 8170,
- 8979,
- 15001,
- ],
- accent_associative_rules=[
- "*",
- "C1",
- "C2",
- "C3",
- "C4",
- "C5",
- ],
- ),
- WordTypes.VERB: PartOfSpeechDetail(
- part_of_speech="動詞",
- part_of_speech_detail_1="自立",
- part_of_speech_detail_2="*",
- part_of_speech_detail_3="*",
- context_id=642,
- cost_candidates=[
- 3100,
- 6160,
- 6360,
- 6561,
- 6761,
- 6962,
- 7414,
- 7866,
- 8318,
- 8771,
- 13433,
- ],
- accent_associative_rules=[
- "*",
- ],
- ),
- WordTypes.ADJECTIVE: PartOfSpeechDetail(
- part_of_speech="形容詞",
- part_of_speech_detail_1="自立",
- part_of_speech_detail_2="*",
- part_of_speech_detail_3="*",
- context_id=20,
- cost_candidates=[
- 1527,
- 3266,
- 3561,
- 3857,
- 4153,
- 4449,
- 5149,
- 5849,
- 6549,
- 7250,
- 10001,
- ],
- accent_associative_rules=[
- "*",
- ],
- ),
- WordTypes.SUFFIX: PartOfSpeechDetail(
- part_of_speech="名詞",
- part_of_speech_detail_1="接尾",
- part_of_speech_detail_2="一般",
- part_of_speech_detail_3="*",
- context_id=1358,
- cost_candidates=[
- 4399,
- 5373,
- 6041,
- 6710,
- 7378,
- 8047,
- 9440,
- 10834,
- 12228,
- 13622,
- 15847,
- ],
- accent_associative_rules=[
- "*",
- "C1",
- "C2",
- "C3",
- "C4",
- "C5",
- ],
- ),
-}
diff --git a/spaces/dmeck/RVC-Speakers/vits/modules/__init__.py b/spaces/dmeck/RVC-Speakers/vits/modules/__init__.py
deleted file mode 100644
index 3b947bd657e268fac3d733746d6ba9c12c458f8f..0000000000000000000000000000000000000000
--- a/spaces/dmeck/RVC-Speakers/vits/modules/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from vits.modules.layer import *
-from vits.modules.attentions import *
-from vits.modules.commons import *
-from vits.modules.transforms import *
diff --git a/spaces/doevent/3D_Photo_Inpainting/README.md b/spaces/doevent/3D_Photo_Inpainting/README.md
deleted file mode 100644
index f68a8fc3cde99e99720b2666cba7135b45a6920d..0000000000000000000000000000000000000000
--- a/spaces/doevent/3D_Photo_Inpainting/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 3D_Photo_Inpainting
-emoji: 👁
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.1.4
-python_version: 3.8.3
-app_file: app.py
-pinned: false
----
-
-# Configuration
diff --git a/spaces/dorischeng/textgenerator/app.py b/spaces/dorischeng/textgenerator/app.py
deleted file mode 100644
index 0bce717ceb59243d726534de090c0a9d1a45b0da..0000000000000000000000000000000000000000
--- a/spaces/dorischeng/textgenerator/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/EleutherAI/gpt-j-6B",title="my first generation",description="Input text and submit.").launch()
\ No newline at end of file
diff --git a/spaces/ennet/ChatDev/camel/agents/base.py b/spaces/ennet/ChatDev/camel/agents/base.py
deleted file mode 100644
index 5f46beb1946b786dcf741a75b7fff567e042b369..0000000000000000000000000000000000000000
--- a/spaces/ennet/ChatDev/camel/agents/base.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-# Licensed under the Apache License, Version 2.0 (the “License”);
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an “AS IS” BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========
-from abc import ABC, abstractmethod
-
-
-class BaseAgent(ABC):
- r"""An abstract base class for all CAMEL agents."""
-
- @abstractmethod
- def reset(self) -> None:
- r"""Resets the agent to its initial state."""
- pass
-
- @abstractmethod
- def step(self) -> None:
- r"""Performs a single step of the agent."""
- pass
diff --git a/spaces/enzostvs/hair-colour/components/form/main_color.tsx b/spaces/enzostvs/hair-colour/components/form/main_color.tsx
deleted file mode 100644
index 858206d836561a7a5e05ff390b8717d184f266e5..0000000000000000000000000000000000000000
--- a/spaces/enzostvs/hair-colour/components/form/main_color.tsx
+++ /dev/null
@@ -1,36 +0,0 @@
-import { useMemo } from "react";
-
-import { ResultsInterface } from "./results";
-
-export const MainColor = ({ result }: { result: ResultsInterface }) => {
- const mainColor = useMemo(() => {
- switch (result.label) {
- case "black hair":
- return "#000000";
- case "blond hair":
- return "#ffcc00";
- case "brown hair":
- return "#663300";
- case "white hair":
- return "#cccccc";
- case "red hair":
- return "#ff6600";
- case "completely bald":
- return "transparent";
- }
- }, [result]);
-
-  // Minimal swatch markup (assumed): fill an element with the predicted main hair colour.
-  return (
-    <div style={{ backgroundColor: mainColor }} />
-  );
-};
diff --git a/spaces/evaluate-measurement/perplexity/README.md b/spaces/evaluate-measurement/perplexity/README.md
deleted file mode 100644
index 276a6c43a7e808d53be7cd9f57295a5d97d277f0..0000000000000000000000000000000000000000
--- a/spaces/evaluate-measurement/perplexity/README.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-title: Perplexity
-emoji: 🤗
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.2
-app_file: app.py
-pinned: false
-tags:
-- evaluate
-- measurement
-description: >-
- Perplexity (PPL) can be used to evaluate the extent to which a dataset is similar to the distribution of text that a given model was trained on.
- It is defined as the exponentiated average negative log-likelihood of a sequence, calculated with exponent base `e`.
-
- For more information on perplexity, see [this tutorial](https://huggingface.co/docs/transformers/perplexity).
----
-
-# Measurement Card for Perplexity
-
-## Measurement Description
-Given a model and an input text sequence, perplexity measures how likely the model is to generate the input text sequence.
-
-As a measurement, it can be used to evaluate how well text matches the distribution of text that the input model was trained on.
-In this case, `model_id` should be the trained model, and `data` should be the text to be evaluated.
-
-This implementation of perplexity is calculated with log base `e`, as in `perplexity = e**(sum(losses) / num_tokenized_tokens)`, following recent convention in deep learning frameworks.
-
-## Intended Uses
-Dataset analysis or exploration.
-
-## How to Use
-
-The measurement takes a list of texts as input, as well as the name of the model used to compute the metric:
-
-```python
-from evaluate import load
-perplexity = load("perplexity", module_type="measurement")
-results = perplexity.compute(data=input_texts, model_id='gpt2')
-```
-
-### Inputs
-- **model_id** (str): model used for calculating Perplexity. NOTE: Perplexity can only be calculated for causal language models.
- - This includes models such as gpt2, causal variations of bert, causal versions of t5, and more (the full list can be found in the AutoModelForCausalLM documentation here: https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoModelForCausalLM )
-- **data** (list of str): input text, where each separate text snippet is one list entry.
-- **batch_size** (int): the batch size to run texts through the model. Defaults to 16.
-- **add_start_token** (bool): whether to add the start token to the texts, so the perplexity can include the probability of the first word. Defaults to True.
-- **device** (str): device to run on, defaults to `cuda` when available
-
-### Output Values
-This metric outputs a dictionary with the perplexity scores for the text input in the list, and the average perplexity.
-If one of the input texts is longer than the max input length of the model, then it is truncated to the max length for the perplexity computation.
-
-```
-{'perplexities': [8.182524681091309, 33.42122268676758, 27.012239456176758], 'mean_perplexity': 22.871995608011883}
-```
-
-The range of this metric is [0, inf). A lower score is better.
-
-#### Values from Popular Papers
-
-
-### Examples
-Calculating perplexity on input_texts defined here:
-```python
-perplexity = evaluate.load("perplexity", module_type="measurement")
-input_texts = ["lorem ipsum", "Happy Birthday!", "Bienvenue"]
-results = perplexity.compute(model_id='gpt2',
- add_start_token=False,
- data=input_texts)
-print(list(results.keys()))
->>>['perplexities', 'mean_perplexity']
-print(round(results["mean_perplexity"], 2))
->>>646.75
-print(round(results["perplexities"][0], 2))
->>>32.25
-```
-Calculating perplexity on input_texts loaded in from a dataset:
-```python
-perplexity = evaluate.load("perplexity", module_type="measurement")
-input_texts = datasets.load_dataset("wikitext",
- "wikitext-2-raw-v1",
- split="test")["text"][:50]
-input_texts = [s for s in input_texts if s!='']
-results = perplexity.compute(model_id='gpt2',
- data=input_texts)
-print(list(results.keys()))
->>>['perplexities', 'mean_perplexity']
-print(round(results["mean_perplexity"], 2))
->>>576.76
-print(round(results["perplexities"][0], 2))
->>>889.28
-```
-
-## Limitations and Bias
-Note that the output value is based heavily on what text the model was trained on. This means that perplexity scores are not comparable between models or datasets.
-
-
-## Citation
-
-```bibtex
-@article{jelinek1977perplexity,
-title={Perplexity—a measure of the difficulty of speech recognition tasks},
-author={Jelinek, Fred and Mercer, Robert L and Bahl, Lalit R and Baker, James K},
-journal={The Journal of the Acoustical Society of America},
-volume={62},
-number={S1},
-pages={S63--S63},
-year={1977},
-publisher={Acoustical Society of America}
-}
-```
-
-## Further References
-- [Hugging Face Perplexity Blog Post](https://huggingface.co/docs/transformers/perplexity)
diff --git a/spaces/facebook/MusicGen/audiocraft/utils/utils.py b/spaces/facebook/MusicGen/audiocraft/utils/utils.py
deleted file mode 100644
index 2c5799f8bc4ee07dd8d60d6afe67fbc5a6039215..0000000000000000000000000000000000000000
--- a/spaces/facebook/MusicGen/audiocraft/utils/utils.py
+++ /dev/null
@@ -1,298 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from concurrent.futures import ProcessPoolExecutor
-from contextlib import contextmanager
-from functools import wraps, lru_cache
-import hashlib
-import json
-import logging
-from pathlib import Path
-import typing as tp
-
-import flashy
-import flashy.distrib
-import omegaconf
-import torch
-from torch.nn.utils.rnn import pad_sequence
-
-
-logger = logging.getLogger(__name__)
-
-
-def model_hash(model: torch.nn.Module) -> str:
- """Return a model hash. This should allow us to track regressions in model init
- from the logs of past experiments.
- """
- hasher = hashlib.sha1()
- for p in model.parameters():
- hasher.update(p.data.cpu().numpy().tobytes())
- return hasher.hexdigest()
-
-
-def dict_from_config(cfg: omegaconf.DictConfig) -> dict:
- """Convenience function to map an omegaconf configuration to a dictionary.
-
- Args:
- cfg (omegaconf.DictConfig): Original configuration to map to dict.
- Returns:
- dict: Config as dictionary object.
- """
- dct = omegaconf.OmegaConf.to_container(cfg, resolve=True)
- assert isinstance(dct, dict)
- return dct
-
-
-def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset:
- if max_samples >= len(dataset):
- return dataset
-
- generator = torch.Generator().manual_seed(seed)
- perm = torch.randperm(len(dataset), generator=generator)
- return torch.utils.data.Subset(dataset, perm[:max_samples].tolist())
-
-
-def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int,
- num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader:
- """Convenience function to load dataset into a dataloader with optional subset sampling.
-
- Args:
- dataset: Dataset to load.
- num_samples (Optional[int]): Number of samples to limit subset size.
- batch_size (int): Batch size.
- num_workers (int): Number of workers for data loading.
- seed (int): Random seed.
- """
- if num_samples is not None:
- dataset = random_subset(dataset, num_samples, seed)
-
- dataloader = flashy.distrib.loader(
- dataset,
- batch_size=batch_size,
- num_workers=num_workers,
- **kwargs
- )
- return dataloader
-
-
-def get_dataset_from_loader(dataloader):
- dataset = dataloader.dataset
- if isinstance(dataset, torch.utils.data.Subset):
- return dataset.dataset
- else:
- return dataset
-
-
-def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None):
- """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension.
-
- Args:
- input (torch.Tensor): The input tensor containing probabilities.
- num_samples (int): Number of samples to draw.
- replacement (bool): Whether to draw with replacement or not.
- Keywords args:
- generator (torch.Generator): A pseudorandom number generator for sampling.
- Returns:
- torch.Tensor: Last dimension contains num_samples indices
- sampled from the multinomial probability distribution
- located in the last dimension of tensor input.
- """
- input_ = input.reshape(-1, input.shape[-1])
- output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator)
- output = output_.reshape(*list(input.shape[:-1]), -1)
- return output
-
-
-def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor:
- """Sample next token from top K values along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- k (int): The k in “top-k”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- top_k_value, _ = torch.topk(probs, k, dim=-1)
- min_value_top_k = top_k_value[..., [-1]]
- probs *= (probs >= min_value_top_k).float()
- probs.div_(probs.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs, num_samples=1)
- return next_token
-
-
-def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor:
- """Sample next token from top P probabilities along the last dimension of the input probs tensor.
-
- Args:
- probs (torch.Tensor): Input probabilities with token candidates on the last dimension.
- p (int): The p in “top-p”.
- Returns:
- torch.Tensor: Sampled tokens.
- """
- probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True)
- probs_sum = torch.cumsum(probs_sort, dim=-1)
- mask = probs_sum - probs_sort > p
- probs_sort *= (~mask).float()
- probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True))
- next_token = multinomial(probs_sort, num_samples=1)
- next_token = torch.gather(probs_idx, -1, next_token)
- return next_token
-
-
-class DummyPoolExecutor:
- """Dummy pool executor to use when we actually have only 1 worker.
- (e.g. instead of ProcessPoolExecutor).
- """
- class DummyResult:
- def __init__(self, func, *args, **kwargs):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- def result(self):
- return self.func(*self.args, **self.kwargs)
-
- def __init__(self, workers, mp_context=None):
- pass
-
- def submit(self, func, *args, **kwargs):
- return DummyPoolExecutor.DummyResult(func, *args, **kwargs)
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, exc_tb):
- return
-
-
-def get_pool_executor(num_workers: int, mp_context=None):
- return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1)
-
-
-def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor:
- """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences).
- For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
-
- Args:
- lengths (torch.Tensor): tensor with lengths
- max_len (int): can set the max length manually. Defaults to None.
- Returns:
- torch.Tensor: mask with 0s where there is pad tokens else 1s
- """
- assert len(lengths.shape) == 1, "Length shape should be 1 dimensional."
- final_length = lengths.max().item() if not max_len else max_len
- final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor
- return torch.arange(final_length, device=lengths.device)[None, :] < lengths[:, None]
-
-
-def hash_trick(word: str, vocab_size: int) -> int:
- """Hash trick to pair each word with an index
-
- Args:
- word (str): word we wish to convert to an index
- vocab_size (int): size of the vocabulary
- Returns:
- int: index of the word in the embedding LUT
- """
- hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16)
- return hash % vocab_size
-
-
-def with_rank_rng(base_seed: int = 1234):
- """Decorator for a function so that the function will use a Random Number Generator
-    whose state depends on the GPU rank. The original RNG state is restored upon returning.
-
- Args:
- base_seed (int): Random seed.
- """
- def _decorator(fun: tp.Callable):
- @wraps(fun)
- def _decorated(*args, **kwargs):
- state = torch.get_rng_state()
- seed = base_seed ^ flashy.distrib.rank()
- torch.manual_seed(seed)
- logger.debug('Rank dependent seed set to %d', seed)
- try:
- return fun(*args, **kwargs)
- finally:
- torch.set_rng_state(state)
- logger.debug('RNG state restored.')
- return _decorated
- return _decorator
-
-
-def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]:
-    """Get a list of tensors and collate them to a single tensor according to the following logic:
- - `dim` specifies the time dimension which will be stacked and padded.
-    - The output will contain 1 new dimension (dimension index 0) which will be the size
-      of the original list.
-
- Args:
- tensors (tp.List[torch.Tensor]): List of tensors to collate.
- dim (int): Dimension which will be stacked and padded.
- Returns:
- tp.Tuple[torch.Tensor, torch.Tensor]:
- torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension
- (dimension index 0) which will be the size of the original list.
- torch.Tensor: Tensor containing length of original tensor sizes (without padding).
- """
- tensors = [x.transpose(0, dim) for x in tensors]
- lens = torch.LongTensor([len(x) for x in tensors])
- padded_tensors = pad_sequence(tensors)
- padded_tensors = padded_tensors.transpose(0, 1)
- padded_tensors = padded_tensors.transpose(1, dim + 1)
- return padded_tensors, lens
-
-
-# TODO: Move to flashy?
-def copy_state(state: tp.Any, device: tp.Union[torch.device, str] = 'cpu',
- dtype: tp.Optional[torch.dtype] = None) -> tp.Any:
- if isinstance(state, torch.Tensor):
- if dtype is None or not state.is_floating_point():
- dtype = state.dtype
- return state.detach().to(device=device, dtype=dtype, copy=True)
- elif isinstance(state, dict):
- return {k: copy_state(v, device, dtype) for k, v in state.items()}
- elif isinstance(state, list):
- return [copy_state(v, device, dtype) for v in state]
-
-
-# TODO: Move to flashy?
-@contextmanager
-def swap_state(model, state, **kwargs):
- old_state = copy_state(model.state_dict())
- model.load_state_dict(state, **kwargs)
- try:
- yield
- finally:
- model.load_state_dict(old_state)
-
-
-@lru_cache(None)
-def warn_once(logger, msg):
- """Warn about a given message only once."""
- logger.warning(msg)
-
-
-def is_jsonable(x: tp.Any):
- """Check if an object can be serialized into a json:"""
- try:
- json.dumps(x)
- return True
- except (TypeError, OverflowError):
- return False
-
-
-def load_clap_state_dict(clap_model, path: tp.Union[str, Path]):
- """Wrapper around state dict loading of CLAP model
- addressing compatibility issues between CLAP and AudioCraft
- HuggingFace transformer version.
- See: https://github.com/LAION-AI/CLAP/issues/118
- """
- from clap_module.factory import load_state_dict # type: ignore
- pkg = load_state_dict(path)
- pkg.pop('text_branch.embeddings.position_ids', None)
- clap_model.model.load_state_dict(pkg)
diff --git a/spaces/fatiXbelha/sd/Bowmasters - A Hotsy-Totsy Action Game with Rag-Doll Physics.md b/spaces/fatiXbelha/sd/Bowmasters - A Hotsy-Totsy Action Game with Rag-Doll Physics.md
deleted file mode 100644
index e68f5d08d91875c9347734c832918081e1c59680..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Bowmasters - A Hotsy-Totsy Action Game with Rag-Doll Physics.md
+++ /dev/null
@@ -1,162 +0,0 @@
-
-Bowmasters Game Download for Android: A Fun and Addictive Multiplayer Game with Bowmen
- If you are looking for a fun and addictive multiplayer game with bowmen, you should try Bowmasters. This game is a world-famous action, aim and shoot game that has millions of fans and downloads worldwide. In this article, we will tell you everything you need to know about Bowmasters, including what it is, how to play it, why you should download it for Android, and how to download it. Read on and discover why Bowmasters is one of the best games you can play on your Android device.
-Download Bowmasters for Android: https://urllie.com/2uNA7i
- What is Bowmasters?
- Bowmasters is a game developed by Playgendary Limited, a company that specializes in creating casual and arcade games for mobile devices. Bowmasters was released in 2016 and has since become one of the most popular games on the Google Play Store, with over 50 million downloads and 4.6 stars out of 5.
- A brief introduction to the game and its features
- Bowmasters is a game that lets you play as one of the 60+ insane characters from all dimensions, each with their own unique weapon and style. You can choose from pirates, ninjas, zombies, superheroes, celebrities, animals, and more. You can also unlock new characters and weapons by playing the game or watching ads.
- The game has multiple game modes that you can enjoy, such as:
-
-Duels: You can challenge your friends or other players online in a one-on-one battle to see who is the best bowman.
-Shooting Range: You can practice your aim and shoot at different targets, such as birds, fruits, bottles, etc.
-Apple Shooting: You can test your accuracy and nerve by shooting an apple off someone's head.
-Events: You can participate in special events that offer different rewards and challenges.
-
- The game also features awesome fatalities with rag-doll physics, which means that you can see your enemies or yourself fly in the air, bleed, explode, or lose limbs when hit by a weapon. The game has a lot of humor and gore, which makes it more fun and exciting.
- How to play Bowmasters?
- The basic gameplay mechanics and controls
- The gameplay of Bowmasters is simple and intuitive. You just have to aim your weapon at your opponent by dragging your finger on the screen. You can see the angle and power of your shot on the top left corner of the screen. You can also adjust your aim by moving your finger up or down. When you are ready, release your finger to shoot.
- The goal is to hit your opponent before they hit you. You have a health bar on the top right corner of the screen that shows how much damage you have taken. If your health bar reaches zero, you lose. If your opponent's health bar reaches zero, you win.
- You can also use special abilities or
- The different game modes and challenges
- As mentioned before, Bowmasters has multiple game modes that you can play and enjoy. Each game mode has its own rules and objectives, which makes the game more diverse and interesting. Here are some of the game modes that you can try:
-
-Duels: This is the main game mode where you can challenge other players online in a one-on-one battle. You can choose to play with a random opponent or invite a friend to play with you. You can also choose the difficulty level of your opponent, from easy to impossible. The winner of the duel gets coins and trophies, which can be used to unlock new characters and weapons.
-Shooting Range: This is a game mode where you can practice your aim and shoot at different targets, such as birds, fruits, bottles, etc. You can earn coins and gems by hitting the targets, which can be used to buy chests that contain new characters and weapons. You can also compete with other players on the leaderboard and see who has the best score.
-Apple Shooting: This is a game mode where you can test your accuracy and nerve by shooting an apple off someone's head. You can choose from different characters to be the shooter or the target, such as Robin Hood, William Tell, or even Donald Trump. You can also adjust the distance and wind speed to make it more challenging. If you hit the apple, you get coins and gems. If you miss, you may hit the person and cause a bloody mess.
-Events: These are special game modes that are available for a limited time and offer different rewards and challenges. For example, there is an event called Zombie Invasion, where you have to shoot zombies with different weapons and avoid getting bitten. There is also an event called Halloween Party, where you have to shoot pumpkins and candy with spooky characters and weapons.
-
- The various characters and weapons to choose from
- One of the most fun and exciting aspects of Bowmasters is the variety of characters and weapons that you can choose from. There are over 60 characters that you can play as, each with their own unique weapon and style. Some of the characters are based on real-life people or fictional characters, such as:
-
-Arnold: A muscular action hero who shoots rockets.
-Lara: A tomb raider who shoots arrows.
-Thor: A Norse god who throws his hammer.
-Walter White: A chemistry teacher who throws meth bombs.
-Deadpool: A sarcastic superhero who shoots guns.
-Shrek: An ogre who throws donkeys.
-
- Some of the characters are original creations or parodies, such as:
-
-Mime: A silent performer who throws invisible balls.
-Clown: A creepy entertainer who throws pies.
-Penguin: A cute animal who slides on ice.
-Neko: A cat girl who throws fish.
-Hipster: A trendy guy who throws coffee cups.
-Unicorn: A magical creature who shoots rainbows.
-
- You can unlock new characters and weapons by playing the game or watching ads. You can also buy chests that contain random characters and weapons with coins or gems. You can also upgrade your characters and weapons by spending coins or gems, which will increase their damage and range.
- Why should you download Bowmasters for Android?
- Bowmasters is a game that you should definitely download for your Android device if you are looking for a fun and addictive multiplayer game with bowmen. Here are some of the reasons why:
- The benefits of playing Bowmasters on your Android device
- It's free and easy to install
- Bowmasters is a free game that you can download from the Google Play Store or other sources (more on that later). It only takes a few minutes to install and does not require any registration or login. You can start playing right away without any hassle.
- It's compatible with most Android devices and versions
- Bowmasters is a game that works well on most Android devices and versions. It does not require a lot of storage space or memory to run smoothly. It also has low battery consumption and does not overheat your device. You can play Bowmasters on your Android phone or tablet without any problem.
- It's fun and engaging for all ages and skill levels
- Bowmasters is a game that anyone can enjoy, regardless of their age or skill level. It has simple and intuitive controls that are easy to learn but hard to master. It has colorful and cartoonish graphics that appeal to both children and adults. It has a lot of humor and gore that make the game more fun and exciting. It has multiple game modes and challenges that keep the game fresh and interesting. It has a lot of characters and weapons that you can customize and upgrade to suit your preference and style. It has a competitive and cooperative multiplayer mode that lets you play with your friends or other players online.
- The drawbacks of playing Bowmasters on your Android device
- Of course, no game is perfect, and Bowmasters also has some drawbacks that you should be aware of before downloading it for your Android device. Here are some of them:
- It contains ads and in-app purchases
- Bowmasters is a free game, but it also contains ads and in-app purchases that may affect your gaming experience. The ads may pop up randomly or after every match, which can be annoying or distracting. The in-app purchases may tempt you to spend real money to buy coins, gems, chests, or premium characters and weapons, which can be expensive or unfair. You can disable the ads by turning off your internet connection or paying a small fee, but you cannot disable the in-app purchases.
- It may crash or lag on some devices or situations
- Bowmasters is a game that works well on most Android devices and versions, but it may also crash or lag on some devices or situations. This may happen due to various reasons, such as low storage space, low memory, low battery, high temperature, incompatible device or version, corrupted file, etc. If this happens, you may lose your progress or have a poor gaming experience. You can try to fix this by clearing your cache, restarting your device, updating your device or app, reinstalling the app, etc.
- It may be too violent or gory for some players or parents
- Bowmasters is a game that has a lot of humor and gore, which make the game more fun and exciting. However, it may also be too violent or gory for some players or parents who are sensitive to blood, violence, death, etc. The game shows your enemies or yourself fly in the air, bleed, explode, or lose limbs when hit by a weapon. The game also has some characters or weapons that are based on real-life people or fictional characters who may be controversial or offensive to some people. If you are one of them, you may want to avoid playing this game or playing it with children.
- How to download Bowmasters for Android?
- If you are convinced that Bowmasters is a game that you want to play on your Android device, you may wonder how to download it. There are two ways to download Bowmasters for Android: from the Google Play Store or from other sources (APK files). Here are the steps to do both:
- The steps to download and install Bowmasters from the Google Play Store
- The Google Play Store is the official and safest way to download and install Bowmasters for your Android device. Here are the steps to do it:
-
-Open the Google Play Store app on your Android device.
-Search for "Bowmasters" in the search bar.
-Select the app with the icon of a red bowman with a blue background.
-Tap on "Install" and wait for the app to download and install on your device.
-Tap on "Open" to launch the app and start playing.
-
- The steps to download and install Bowmasters from other sources (APK files)
- APK files are files that contain the installation package of an Android app. You can download APK files from other sources than the Google Play Store, such as websites, forums, etc. However, this method is not recommended because it may expose your device to viruses, malware, spyware, etc. If you still want to try this method, here are the steps to do it:
-
-Find a reliable website that offers APK files of Bowmasters. You can search for "Bowmasters APK" on Google or other search engines.
-Download the APK file of Bowmasters from the website to your Android device.
-Enable "Unknown Sources" on your Android device by going to Settings > Security > Unknown Sources and toggling it on.
-Locate the APK file of Bowmasters on your Android device using a file manager app.
-Tap on the APK file and follow the instructions to install it on your device.
-Tap on "Open" to launch the app and start playing.
-
- The tips to optimize your Bowmasters experience on your Android device
- To make sure that you have the best gaming experience with Bowmasters on your Android device, here are some tips that you can follow:
-
-Make sure that your device has enough storage space and memory to run the game smoothly. You can check this by going to Settings > Storage and Settings > Memory on your device.
-Make sure that your device has a good internet connection to play the game online. You can check this by going to Settings > Network and Internet on your device.
-Make sure that your device has a good battery level and temperature to avoid overheating or shutting down. You can check this by going to Settings > Battery and Settings > Device Care on your device.
-Make sure that your device has the latest version of the game and the Android system. You can check this by going to the Google Play Store app and Settings > System Update on your device.
-Make sure that you have fun and enjoy the game. You can do this by playing with your friends, trying new characters and weapons, participating in events, etc.
-
- Conclusion
- Bowmasters is a fun and addictive multiplayer game with bowmen that you can play on your Android device. It has simple and intuitive controls, colorful and cartoonish graphics, multiple game modes and challenges, various characters and weapons, awesome fatalities with rag-doll physics, and a competitive and cooperative multiplayer mode. It is free and easy to download and install from the Google Play Store or other sources (APK files), but it also contains ads and in-app purchases that may affect your gaming experience. It is compatible with most Android devices and versions, but it may also crash or lag on some devices or situations. It is fun and engaging for all ages and skill levels, but it may also be too violent or gory for some players or parents. If you are looking for a game that will keep you entertained for hours, you should download Bowmasters for Android today.
- Here are some FAQs that you may have about Bowmasters:
- FAQs
-
-Q: How can I get more coins and gems in Bowmasters?
-A: You can get more coins and gems in Bowmasters by playing the game, winning duels, hitting targets, participating in events, watching ads, opening chests, or buying them with real money.
-Q: How can I unlock new characters and weapons in Bowmasters?
-A: You can unlock new characters and weapons in Bowmasters by playing the game, earning trophies, opening chests, or buying them with coins or gems.
-Q: How can I upgrade my characters and weapons in Bowmasters?
-A: You can upgrade your characters and weapons in Bowmasters by spending coins or gems, which will increase their damage and range.
-Q: How can I play with my friends in Bowmasters?
-A: You can play with your friends in Bowmasters by inviting them to a duel or joining their duel via Facebook or a code.
-Q: How can I contact the developers of Bowmasters?
-A: You can contact the developers of Bowmasters by sending them an email at support@playgendary.com or visiting their website at https://playgendary.com/.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/CarX Drift Racing APK Mod Everything You Need to Know About This Amazing Drifting Game.md b/spaces/fatiXbelha/sd/CarX Drift Racing APK Mod Everything You Need to Know About This Amazing Drifting Game.md
deleted file mode 100644
index 0f82c2aa5e5311eeca137f238d18fe97cbf68f0b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/CarX Drift Racing APK Mod Everything You Need to Know About This Amazing Drifting Game.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-CarX Drift Racing APK Mod: A Review
-If you are a fan of car racing games, you might have heard of CarX Drift Racing. It is one of the most popular and realistic drifting games on Android devices. In this game, you can experience the thrill of drifting on various tracks with different cars. You can also customize your cars and tracks to suit your preferences. But what if you want to enjoy the game without any limitations or restrictions? That's where CarX Drift Racing APK Mod comes in handy. In this article, we will review this modded version of the game and show you how to download and install it on your device.
-Download CarX Drift Racing APK Mod: https://urllie.com/2uNIif
- What is CarX Drift Racing?
-CarX Drift Racing is a racing game developed by CarX Technologies. It was released in 2014 and has since gained millions of downloads and positive reviews from players. The game is designed to simulate the real physics and graphics of drifting, which is a driving technique where the driver intentionally oversteers the car to make it slide sideways. The game features over 40 cars and 30 tracks, each with its own characteristics and challenges. You can also customize your cars with different parts, colors, stickers, and wheels. You can play the game in online or offline modes, competing with other players or against the AI.
- Features of CarX Drift Racing
-Realistic physics and graphics
-One of the main attractions of CarX Drift Racing is its realistic physics and graphics. The game uses a sophisticated car physics engine that accurately simulates the behavior of different cars on different surfaces. You can feel the difference between front-wheel drive, rear-wheel drive, and all-wheel drive cars, as well as between asphalt, grass, sand, and snow. The game also has stunning graphics that create an immersive environment for drifting. You can see the smoke, dust, sparks, and tire marks as you drift your way to victory.
- Customizable cars and tracks
-Another feature of CarX Drift Racing is its customization options. You can choose from over 40 cars, ranging from sports cars, muscle cars, supercars, to trucks. You can also modify your cars with different parts, such as engines, turbos, brakes, suspensions, transmissions, and more. You can also change the appearance of your cars with different colors, stickers, and wheels. You can even create your own tracks with the track editor, where you can adjust the layout, surface, weather, time of day, and obstacles.
- Online and offline modes
-CarX Drift Racing also offers online and offline modes for different play styles. You can play the game offline in single-player mode, where you can practice your skills or complete various missions and challenges. You can also play the game online in multiplayer mode, where you can compete with other players from around the world in different modes, such as drift races, tandem drifts, time attacks, or freestyle drifts. You can also join or create your own club and chat with other members.
- Why download CarX Drift Racing APK Mod?
-Unlimited money and gold
-One of the reasons why you might want to download CarX Drift Racing APK Mod is because it gives you unlimited money and gold. Money and gold are the main currencies in the game that you need to buy new cars, upgrade parts, or unlock tracks. However, earning money and gold in the game can be slow and tedious. With CarX Drift Racing APK Mod, you can get unlimited money and gold for free. This way, you can buy and upgrade any car or track you want without worrying about the cost.
- Unlocked all cars and tracks
-Another reason why you might want to download CarX Drift Racing APK Mod is because it unlocks all cars and tracks for you. Normally, you have to earn money and gold to buy new cars or unlock new tracks. Some cars and tracks are also locked behind certain levels or achievements. This can limit your choices and enjoyment of the game. With CarX Drift Racing APK Mod, you can access all cars and tracks from the start. You can try out different combinations and find your favorite ones.
- No ads and root required
-A final reason why you might want to download CarX Drift Racing APK Mod is because it removes ads and root requirements from the game. Ads can be annoying and distracting, especially when they pop up in the middle of the game. They can also consume your data and battery. Rooting your device can be risky and complicated, as it can void your warranty, expose your device to malware, or cause system errors. With CarX Drift Racing APK Mod, you can enjoy the game without any ads or root permissions. You can play the game smoothly and safely.
- How to download and install CarX Drift Racing APK Mod?
-Step 1: Download the APK file from a trusted source
-The first step to download and install CarX Drift Racing APK Mod is to find a reliable source that provides the APK file. You can search online for websites that offer CarX Drift Racing APK Mod, but be careful of fake or malicious links that might harm your device. You can also use the link below to download the latest version of CarX Drift Racing APK Mod from our website.
- Download CarX Drift Racing APK Mod
- Step 2: Enable unknown sources on your device
-The second step to download and install CarX Drift Racing APK Mod is to enable unknown sources on your device. This is necessary because Android devices do not allow the installation of apps from sources other than the Google Play Store by default. To enable unknown sources, follow these steps:
-
-Go to Settings > Security > Unknown Sources.
-Toggle on the option to allow the installation of apps from unknown sources.
-Tap OK to confirm.
-
- Step 3: Install the APK file and launch the game
-The final step to download and install CarX Drift Racing APK Mod is to install the APK file and launch the game. To do this, follow these steps:
-
-Locate the downloaded APK file on your device's file manager or downloads folder.
-Tap on the file to start the installation process.
-Follow the instructions on the screen to complete the installation.
-Once the installation is done, tap on the game icon to launch it.
-Enjoy CarX Drift Racing with unlimited money, gold, cars, tracks, and no ads or root required.
-
- Conclusion
-CarX Drift Racing is a fun and realistic drifting game that lets you experience the thrill of sliding sideways on various tracks with different cars. You can also customize your cars and tracks to suit your preferences. However, if you want to enjoy the game without any limitations or restrictions, you should download CarX Drift Racing APK Mod. This modded version of the game gives you unlimited money, gold, cars, tracks, and no ads or root required. You can download CarX Drift Racing APK Mod from our website using the link below.
- Download CarX Drift Racing APK Mod
- FAQs
-
-Is CarX Drift Racing APK Mod safe?
-Yes, CarX Drift Racing APK Mod is safe to use as long as you download it from a trusted source like our website. We scan all our files with antivirus software before uploading them for your safety.
- Is CarX Drift Racing APK Mod compatible with my device?
-CarX Drift Racing APK Mod is compatible with most Android devices that run on Android 4.1 or higher. However, some devices may not support some features or functions of the game due to hardware limitations or software issues.
- Can I play CarX Drift Racing online with CarX Drift Racing APK Mod?
-Yes, you can play Car X Drift Racing online with CarX Drift Racing APK Mod, as the mod does not affect the online mode of the game. You can still compete with other players in multiplayer mode or join clubs and chat with other members. However, you should be careful not to use the mod in a way that gives you an unfair advantage over other players, as this might result in a ban from the game.
- How can I update CarX Drift Racing APK Mod?
-To update CarX Drift Racing APK Mod, you need to download the latest version of the mod from our website and install it over the existing one. You do not need to uninstall the previous version or lose your progress. However, you should always back up your data before updating any app or game, just in case something goes wrong.
- Can I request a feature or report a bug for CarX Drift Racing APK Mod?
-Yes, you can request a feature or report a bug for CarX Drift Racing APK Mod by leaving a comment on our website or contacting us through our email. We appreciate your feedback and suggestions and we will try our best to improve our mod according to your needs.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Cheat Rebel Racing Unlock All the Cars and Upgrades You Want.md b/spaces/fatiXbelha/sd/Download Cheat Rebel Racing Unlock All the Cars and Upgrades You Want.md
deleted file mode 100644
index 371113eb80f0f1af6e05b7bd388aac266e60850a..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Cheat Rebel Racing Unlock All the Cars and Upgrades You Want.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-How to Download Cheat Rebel Racing and Win Every Race
-If you are a fan of racing games, you might have heard of Rebel Racing , one of the most popular and realistic racing games for mobile devices. In this game, you can race with real-world licensed cars, customize them to your liking, and compete against the world's elite drivers in various modes. However, you might also find the game challenging, especially if you want to unlock all the cars, upgrades, and features. That's why you might be interested in downloading Cheat Rebel Racing , a mod apk file that gives you unlimited cash, gold, fuel, and cars. In this article, we will show you how to download Cheat Rebel Racing and use it to win every race.
- What is Rebel Racing?
-Rebel Racing is a mobile racing game developed by Hutch Games, a company that specializes in racing games. The game was released in November 2019 for Android and iOS devices. It has received over 50 million downloads and positive reviews from players and critics alike.
-Download Cheat Rebel Racing: https://urllie.com/2uNAGG
-A realistic and thrilling racing game for mobile devices
-Rebel Racing is not your typical arcade racing game. It features realistic driving physics, speedfreak add-ons and turbos, epic overtakes, and stunning West Coast locations. You can feel the thrill of racing as you drift, draft, boost, and jump your way to victory.
-Features real-world licensed cars, stunning graphics, and various modes
-Rebel Racing boasts a collection of real-world classics and awesome supercars from iconic automobile manufacturers like Ford, Mitsubishi Motors, Ariel, Bugatti, and more. You can collect, customize, and upgrade your fleet of dream cars to suit your style and preferences. The game also has stunning graphics and effects that make the cars and environments look amazing. You can race in various modes such as career mode, boss races, limited-time events, tournaments, daily challenges, etc.
-Challenges players to compete against elite drivers and climb the ranks
-Rebel Racing is not an easy game. You will have to face some of the best drivers in America's most exclusive road racing event. You will have to use your skills, strategy, and cheats to beat them in high-octane, wheel-to-wheel action. You will also have to climb the ranks of the Rebel Racing tournament and earn trophies, rewards, and respect.
- Why Download Cheat Rebel Racing?
-If you love Rebel Racing but find it too hard or too expensive to progress in the game, you might want to download Cheat Rebel Racing. This is a mod apk file that gives you access to unlimited resources and features in the game. Here are some of the benefits of downloading Cheat Rebel Racing:
-To get unlimited cash, gold, fuel, and cars
-Cash, gold, and fuel are the main currencies in Rebel Racing. You need them to buy new cars, upgrade your existing ones, refill your gas tank, and enter races. However, they are not easy to earn in the game. You have to win races, complete challenges, watch ads, or spend real money to get them. With Cheat Rebel Racing, you don't have to worry about that. You can get unlimited cash, gold, and fuel for free. You can also get unlimited cars of any type and rarity. You can have the best and most expensive cars in the game without spending a dime.
-To unlock all upgrades and customizations
-Rebel Racing allows you to upgrade and customize your cars to improve their performance and appearance. You can upgrade their engine, transmission, tires, suspension, brakes, etc. You can also change their color, paint job, decals, rims, etc. However, these upgrades and customizations are not cheap. You have to spend a lot of cash and gold to unlock them. With Cheat Rebel Racing, you can unlock all the upgrades and customizations for free. You can make your cars faster, stronger, and more stylish without any limitations.
-To enjoy the game without any limitations or restrictions
-Rebel Racing is a fun and addictive game, but it also has some limitations and restrictions that can hamper your enjoyment. For example, you have a limited amount of fuel that you can use to enter races. Once you run out of fuel, you have to wait for it to regenerate or buy more with gold. You also have to deal with ads that pop up every now and then. With Cheat Rebel Racing, you can enjoy the game without any limitations or restrictions. You can race as much as you want without worrying about fuel. You can also disable the ads and play the game without any interruptions.
- How to Download Cheat Rebel Racing?
-If you are convinced that Cheat Rebel Racing is the best way to enjoy the game, you might be wondering how to download it. The process is simple and easy. Just follow these steps:
-Find a reliable and safe source of the cheat mod apk file
-The first thing you need to do is find a reliable and safe source of the cheat mod apk file. There are many websites that claim to offer Cheat Rebel Racing, but not all of them are trustworthy. Some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. To avoid that, you should do some research and check the reviews and ratings of the website before downloading anything from it. You should also scan the file with an antivirus program before installing it.
-Download and install the file on your device
-The next thing you need to do is download and install the file on your device. To do that, you need to enable the installation of apps from unknown sources on your device settings. This will allow you to install apps that are not from the official app store. Then, you need to locate the downloaded file on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-Follow the instructions and grant the necessary permissions
-The last thing you need to do is follow the instructions and grant the necessary permissions for Cheat Rebel Racing to work properly. The app might ask you for some permissions such as access to your storage, network, location, etc. You need to grant these permissions for the app to function correctly. You might also need to verify your device or account with a code or captcha before using the app.
-Launch the game and enjoy the cheats
-Now that you have installed Cheat Rebel Racing on your device, you can launch the game and enjoy the cheats. You will see that you have unlimited cash, gold, fuel, and cars in your account. You can also unlock all the upgrades and customizations for your cars. You can enter any race and win it easily with your cheats. You can also disable the ads and play the game without any interruptions. You can have fun and enjoy the game to the fullest.
- Tips and Tricks for Using Cheat Rebel Racing
-Although Cheat Rebel Racing gives you a lot of advantages in the game, you still need to use some tips and tricks to make the most of it. Here are some of them:
-Use your boost wisely and strategically
-Your boost is a powerful tool that can help you speed up and overtake your opponents. However, you should not use it randomly or wastefully. You should use it wisely and strategically, depending on the situation. For example, you can use it at the start of the race to gain an early lead, or at the end of the race to secure your victory. You can also use it on straight roads or ramps to maximize your speed, or on curves or corners to avoid losing momentum.
-Follow the racing line and avoid collisions
-The racing line is the optimal path that you should follow on the track to achieve the best lap time. It is usually marked by a colored line on the road that changes from green to yellow to red, depending on your speed and angle. You should try to follow the racing line as much as possible, as it will help you improve your performance and efficiency. You should also avoid collisions with other cars or obstacles, as they will slow you down and damage your car.
-Take advantage of drafting and overtaking
-Drafting is a technique that involves following closely behind another car to reduce air resistance and increase your speed. Overtaking is a technique that involves passing another car to gain a better position or score. You should take advantage of both techniques to improve your chances of winning. You can draft behind another car until you have enough speed and boost, then overtake them when you see an opening or opportunity. You can also use your boost to overtake multiple cars at once or to create a gap between you and them.
-Experiment with different cars and settings
-Cheat Rebel Racing gives you access to unlimited cars of different types and rarities. You can experiment with different cars and settings to find the ones that suit your style and preferences. You can try different combinations of engine, transmission, tires, suspension, brakes, etc., to see how they affect your performance and handling. You can also try different colors, paint jobs, decals, rims, etc., to see how they affect your appearance and style.
- Conclusion
-Cheat Rebel Racing is a fun and easy way to enjoy Rebel Racing without any limitations or restrictions. It gives you unlimited cash, gold, fuel, and cars, as well as all the upgrades and customizations for your cars. It lets you dominate every race and become the best driver in the game. If you want to download Cheat Rebel Racing, just follow the steps we have provided in this article and enjoy the cheats.
- FAQs
-Is Cheat Rebel Racing safe to use?
-Cheat Rebel Racing is safe to use as long as you download it from a reliable and safe source. However, you should always be careful when downloading any mod apk file from the internet, as some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. You should also scan the file with an antivirus program before installing it.
-Does Cheat Rebel Racing work on iOS devices?
-Cheat Rebel Racing works on both Android and iOS devices. However, the installation process might be different for iOS devices. You might need to use a third-party app installer or jailbreak your device to install Cheat Rebel Racing on your iOS device.
-Can I use Cheat Rebel Racing online?
-Cheat Rebel Racing works both online and offline. You can use it online to play against other players in multiplayer mode or offline to play against AI opponents in single-player mode.
-Will I get banned for using Cheat Rebel Racing?
-Cheat Rebel Racing is undetectable by the game's anti-cheat system, so you will not get banned for using it. However, you should still be careful not to abuse the cheats or make them obvious to other players, as they might report you or complain about you.
-Where can I download Cheat Rebel Racing?
-You can download Cheat Rebel Racing from various websites that offer mod apk files for mobile games. However, not all of them are trustworthy or safe. You should do some research and check the reviews and ratings of the website before downloading anything from it. You can also use the link we have provided below to download Cheat Rebel Racing from a trusted source.
-I hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy racing!
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download the Latest Version of Hill Climb Racing 2 APK for Android.md b/spaces/fatiXbelha/sd/Download the Latest Version of Hill Climb Racing 2 APK for Android.md
deleted file mode 100644
index 0539a45af3125ba2e071a39976449171e65a05a8..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download the Latest Version of Hill Climb Racing 2 APK for Android.md
+++ /dev/null
@@ -1,226 +0,0 @@
-
-Hill Climb Racing 2: A Fun and Challenging Racing Game
-If you are looking for a racing game that is easy to play but hard to master, then you should check out Hill Climb Racing 2. This game is the sequel to the popular Hill Climb Racing, which has been downloaded over half a billion times worldwide. Hill Climb Racing 2 takes the series to new heights with the introduction of online multiplayer, team mode, new vehicles, tracks, customization options, and more. In this article, we will tell you everything you need to know about this game, including its features, tips and tricks, review, and how to download it for free.
- Introduction
-What is Hill Climb Racing 2?
-Hill Climb Racing 2 is a 2D physics-based racing game developed by Fingersoft, a Finnish game studio. The game was released in December 2016 for Android and iOS devices. The game is a sequel to the original Hill Climb Racing, which was released in 2012.
-Download Hill Climb Racing 2 APK: https://urllie.com/2uNFfr
-The game follows the adventures of Bill Newton, a daring driver who loves to race on hills, mountains, deserts, snow, and other terrains. You can control his vehicle by using two pedals: gas and brake. You have to balance speed and stability as you climb hills, jump over obstacles, perform stunts, and avoid crashing. You can also customize your vehicle's appearance and performance by buying new parts, skins, tuning parts, etc.
-Why should you play Hill Climb Racing 2?
-Hill Climb Racing 2 is a fun and addictive game that will keep you entertained for hours. Here are some reasons why you should play this game:
-
-It has simple but challenging gameplay that requires skill and strategy.
-It has online multiplayer mode where you can race against other players from around the world.
-It has team mode where you can join or create a racing team with your friends and compete in seasons.
-It has arcade mode where you can perform cool stunt tricks and earn bonus coins.
-It has adventure mode where you can explore different tracks and environments.
-It has weekly events that change up the gameplay in new exciting ways.
-It has dozens of vehicles and customization options that suit your style and preference.
-It has colorful graphics, smooth animations, realistic physics, and catchy sound effects.
-It is free to download and play, with optional in-app purchases.
-
- Features of Hill Climb Racing 2
-Dozens of vehicles and customization options
-Hill Climb Racing 2 offers you a wide range of vehicles to choose from. You can start with a basic jeep, but as you progress in the game, you can unlock more cars, trucks, bikes, tanks, etc. Each vehicle has its own characteristics, such as speed, handling, suspension, grip, etc. You can also upgrade your vehicle's parts to improve its performance. For example, you can increase the engine power, reduce the weight, add roll cages, etc.
-You can buy new skins, paints, stickers, hats, etc. to make your vehicle look unique and cool. You can also customize your driver's appearance by changing his or her outfit, hairstyle, accessories, etc. You can mix and match different items to create your own style.
-Online multiplayer and team mode
-One of the most exciting features of Hill Climb Racing 2 is the online multiplayer mode. In this mode, you can race against up to three other players in real time. You can choose from different modes, such as cup races, friendly races, or ranked races. You can also select from different tracks and difficulty levels. The online multiplayer mode is a great way to test your skills and compete with other players from around the world.
-Another feature that adds more fun and social interaction to the game is the team mode. In this mode, you can join or create a racing team with your friends or other players. You can chat with your teammates, share tips and tricks, and support each other. You can also participate in team seasons, where you can race against other teams and earn points for your team. The team with the most points at the end of the season wins rewards and trophies.
-Arcade racing and stunt tricks
-Hill Climb Racing 2 is not just a racing game, but also an arcade game. The game has a lot of elements that make it fun and entertaining, such as stunt tricks, coins, gems, boosters, etc. You can perform cool stunt tricks by using the hills, ramps, loops, bridges, and other obstacles on the tracks. You can do flips, wheelies, backflips, frontflips, etc. to earn bonus coins and points. You can also collect coins and gems on the way to buy new vehicles and parts.
-The game also has boosters that can help you speed up or slow down your vehicle. For example, you can use nitro to boost your speed, magnets to attract coins and gems, shields to protect you from damage, etc. You have to be careful though, as some boosters can also have negative effects on your vehicle. For example, using too much nitro can overheat your engine or make you lose control of your vehicle.
-Various tracks and environments
-Hill Climb Racing 2 has a variety of tracks and environments that make the game more diverse and interesting. You can race on different terrains, such as hills, mountains, deserts, snow, forests, beaches, etc. Each terrain has its own challenges and obstacles that you have to overcome. For example, on snow tracks, you have to deal with slippery roads and snowmen; on desert tracks, you have to avoid cacti and sandstorms; on forest tracks, you have to dodge trees and animals; etc.
-The game also has different weather and time-of-day conditions. For example, on foggy or rainy tracks, you have to deal with reduced visibility and slippery roads; on night tracks, you have to use your headlights and watch out for dark shadows; etc. The game also has different themes and events that change the appearance and atmosphere of the tracks. For example, on Halloween tracks, you can see pumpkins, ghosts, and bats; on Christmas tracks, you can see snowmen, presents, and reindeer; etc.
-Weekly events and rewards
-Hill Climb Racing 2 also has weekly events that add more variety and excitement to the game. Every week, there is a new event that changes the rules and conditions of the game. For example, there are events that limit your vehicle's fuel, increase your vehicle's weight, reverse your vehicle's controls, etc. These events challenge you to adapt to different situations and strategies.
-By participating in these events, you can earn rewards such as coins, gems, chests, tickets, etc. You can also earn trophies that increase your rank and reputation. The higher your rank, the more rewards you can get. You can also unlock new vehicles and tracks by reaching certain ranks. The weekly events are a great way to test your skills and earn more rewards.
- Tips and Tricks for Hill Climb Racing 2
-How to handle jumps and landings
-One of the most important skills in Hill Climb Racing 2 is how to handle jumps and landings. Jumps and landings can make or break your race, as they can affect your speed, stability, and damage. Here are some tips on how to handle jumps and landings:
-
-When you approach a jump, try to adjust your speed and angle so that you can land smoothly on the other side. Avoid going too fast or too slow, as this can make you overshoot or undershoot the landing.
-When you are in the air, try to tilt your vehicle forward or backward to align it with the slope of the landing. Avoid landing on your nose or tail, as this can cause damage or flip your vehicle over.
-When you land, try to use your brake or gas pedal to balance your vehicle and maintain your momentum. Avoid braking too hard or accelerating too fast, as this can make you lose control or waste fuel.
-When you perform a stunt trick in the air, such as a flip or a wheelie, try to time it so that you can land safely and earn bonus coins. Avoid doing too many tricks or landing upside down, as this can cause damage or crash your vehicle.
-
-How to upgrade your vehicle and parts
-Another important skill in Hill Climb Racing 2 is how to upgrade your vehicle and parts. Upgrading your vehicle and parts can improve your performance and give you an edge over your opponents. Here are some tips on how to upgrade your vehicle and parts:
-
-When you upgrade your vehicle's parts, try to focus on the ones that suit your playstyle and preference. For example, if you like speed, upgrade your engine; if you like stability, upgrade your suspension; if you like grip, upgrade your tires; etc.
-also upgrade your tires; etc.
-When you upgrade your vehicle's parts, try to use the tuning parts that match your vehicle's type and track's condition. For example, if you use a bike, use the wheelie boost; if you use a truck, use the weight distribution; if you race on a snowy track, use the snow tires; etc.
-When you upgrade your vehicle's parts, try to save some coins and gems for the next vehicle or part that you want to unlock. Don't spend all your money on one vehicle or part, as you might regret it later.
-How to win races and earn gems
-Another important skill in Hill Climb Racing 2 is how to win races and earn gems. Winning races and earning gems can help you progress faster in the game and unlock more content. Here are some tips on how to win races and earn gems:
-
-When you race in online multiplayer mode, try to choose the track and difficulty level that suit your vehicle and skill. Avoid racing on tracks that are too hard or too easy for you, as this can affect your chances of winning.
-When you race in online multiplayer mode, try to use your boosters wisely. Don't waste them on unnecessary moments, such as when you are already ahead or behind. Save them for critical moments, such as when you need to catch up or overtake.
-When you race in online multiplayer mode, try to avoid crashing or damaging your vehicle. Crashing or damaging your vehicle can slow you down or end your race prematurely. Try to drive carefully and avoid hitting obstacles or other vehicles.
-When you race in online multiplayer mode, try to collect as many coins and gems as possible. Coins and gems can help you buy new vehicles and parts, as well as upgrade them. They can also help you enter more races and events.
-When you race in team mode, try to cooperate with your teammates and support them. Don't compete with them or sabotage them. Share tips and tricks, chat with them, and cheer them on. By working together, you can increase your team's points and rank.
-How to unlock more vehicles and tracks
-Another important skill in Hill Climb Racing 2 is how to unlock more vehicles and tracks. Unlocking more vehicles and tracks can make the game more fun and diverse. Here are some tips on how to unlock more vehicles and tracks:
-
-When you want to unlock a new vehicle, try to save enough coins and gems to buy it. You can also get lucky and find it in a chest or a ticket.
-When you want to unlock a new track, try to reach the required rank or level to access it. You can also get lucky and find it in a chest or a ticket.
-When you want to unlock a new skin, paint, sticker, hat, etc., try to complete the challenges or achievements that reward them. You can also get lucky and find them in a chest or a ticket.
-When you want to unlock a new outfit, hairstyle, accessory, etc., try to collect enough scraps or tokens to craft them. You can also get lucky and find them in a chest or a ticket.
-
-How to use boost and fuel efficiently
-Another important skill in Hill Climb Racing 2 is how to use boost and fuel efficiently. Boost and fuel are essential resources that can help you speed up or slow down your vehicle. Here are some tips on how to use boost and fuel efficiently:
-
-When you use boost, try to time it so that you can gain the most speed and distance. Don't use it when you are already at top speed or when you are about to hit an obstacle or a slope.
-When you use boost, try to avoid overheating your engine or losing control of your vehicle. Overheating your engine can damage it or make it explode. Losing control of your vehicle can make you crash or flip over.
-When you use fuel, try to conserve it as much as possible. Don't waste it by accelerating too much or braking too hard. Try to maintain a steady speed and momentum.
-When you use fuel, try to refill it whenever possible. You can find fuel cans on the tracks that can replenish your fuel tank. You can also use fuel boosters that can increase your fuel capacity or efficiency.
-
- Review of Hill Climb Racing 2
-Pros and cons of the game
-Hill Climb Racing 2 has many pros and cons that can affect your enjoyment and satisfaction. Here are some of the pros and cons of the game:
-
-
-| Pros | Cons |
-| --- | --- |
-| Simple but challenging gameplay that requires skill and strategy. | Some tracks and vehicles can be too hard or too easy for some players. |
-| Online multiplayer mode where you can race against other players from around the world. | Some players can cheat or hack the game to gain unfair advantages. |
-| Team mode where you can join or create a racing team with your friends and compete in seasons. | Some teams can be inactive or uncooperative. |
-| Arcade mode where you can perform cool stunt tricks and earn bonus coins. | Some stunt tricks can be risky or impossible to perform. |
-| Adventure mode where you can explore different tracks and environments. | Some tracks and environments can be repetitive or boring. |
-| Weekly events that change up the gameplay in new exciting ways. | Some events can be frustrating or unfair. |
-| Dozens of vehicles and customization options that suit your style and preference. | Some vehicles and customization options can be expensive or hard to unlock. |
-| Colorful graphics, smooth animations, realistic physics, and catchy sound effects. | Some graphics, animations, physics, and sound effects can be glitchy or buggy. |
-| Free to download and play, with optional in-app purchases. | Some in-app purchases can be overpriced or unnecessary. |
-
- User ratings and feedback
-Hill Climb Racing 2 has received mostly positive ratings and feedback from users who have played the game. The game has an average rating of 4.4 out of 5 stars on Google Play Store and 4.6 out of 5 stars on App Store. The game has also been downloaded over 100 million times on Google Play Store and over 10 million times on App Store. Here are some of the user reviews from both platforms:
- "This game is awesome! I love the graphics, the gameplay, the vehicles, the tracks, everything! It's so fun and addictive, I can't stop playing it. The online multiplayer mode is also great, I enjoy racing against other players from around the world. The team mode is also cool, I like joining a team with my friends and competing in seasons. The weekly events are also fun, they keep the game fresh and exciting. I highly recommend this game to anyone who likes racing games." - Google Play Store user
- "This game is amazing! I love the physics, the animations, the sounds, the customization, everything! It's so entertaining and challenging, I always want to play it. The online multiplayer mode is also awesome, I like racing against other players from different countries. The team mode is also nice, I like creating a team with my family and competing in seasons. The weekly events are also fun, they change the game in new interesting ways. I strongly suggest this game to anyone who likes arcade games." - App Store user
- "This game is good, but it has some flaws. I like the gameplay, the vehicles, the tracks, etc., but it can also be frustrating and unfair. The online multiplayer mode is sometimes laggy or buggy, some players cheat or hack the game to win easily. The team mode is sometimes boring or annoying, some teams are inactive or uncooperative. The weekly events are sometimes hard or impossible to complete. The vehicles and customization options are sometimes expensive or hard to unlock. I hope the developers fix these issues soon." - Google Play Store user
- "This game is decent, but it has some problems. I like the physics, the animations, the sounds, etc., but it can also be glitchy or buggy. The online multiplayer mode is sometimes unstable or broken, some players use mods or hacks to gain unfair advantages. The team mode is sometimes dull or irritating, some teams are lazy or rude. The weekly events are sometimes easy or boring. The vehicles and customization options are sometimes cheap or ugly. I wish the developers improve these aspects soon." - App Store user
- Comparison with the original game
-Hill Climb Racing 2 is the sequel to the original Hill Climb Racing. The original game is also a 2D physics-based racing game that follows the same protagonist, Bill Newton, as he races on various terrains and performs stunt tricks. However, the sequel has many improvements and additions that make it superior to the original game. Here are some of the differences between the two games:
-
-
-| Hill Climb Racing | Hill Climb Racing 2 |
-| --- | --- |
-| No online multiplayer mode. | Online multiplayer mode where you can race against other players from around the world. |
-| No team mode. | Team mode where you can join or create a racing team with your friends and compete in seasons. |
-| No arcade mode. | Arcade mode where you can perform cool stunt tricks and earn bonus coins. |
-| No adventure mode. | Adventure mode where you can explore different tracks and environments. |
-| No weekly events. | Weekly events that change up the gameplay in new exciting ways. |
-| Fewer vehicles and customization options. | Dozens of vehicles and customization options that suit your style and preference. |
-| Simpler graphics, animations, physics, and sound effects. | Colorful graphics, smooth animations, realistic physics, and catchy sound effects. |
-| More in-app purchases and ads. | Fewer in-app purchases and ads. |
-
- Conclusion
-Summary of the main points
-Hill Climb Racing 2 is a fun and challenging racing game that is easy to play but hard to master. The game has many features that make it diverse and interesting, such as online multiplayer mode, team mode, arcade mode, adventure mode, weekly events, vehicles and customization options, etc. The game also has simple but challenging gameplay that requires skill and strategy, as well as colorful graphics, smooth animations, realistic physics, and catchy sound effects. The game is free to download and play, with optional in-app purchases. The game is a sequel to the original Hill Climb Racing, which was also a popular racing game. However, the sequel has many improvements and additions that make it superior to the original game.
- Call to action
-If you are interested in playing Hill Climb Racing 2, you can download it for free from Google Play Store or App Store. You can also visit the official website of the game for more information and updates. You can also follow the game on social media platforms such as Facebook, Twitter, Instagram, YouTube, etc. You can also join the game's community on Reddit, Discord, etc. to chat with other players, share tips and tricks, and give feedback to the developers. If you enjoy playing Hill Climb Racing 2, don't forget to rate it and leave a review on the app store. You can also invite your friends to play with you and create a racing team together. Have fun and good luck!
- FAQs
- Here are some of the frequently asked questions about Hill Climb Racing 2:
-
- How do I download Hill Climb Racing 2 for free?
- You can download Hill Climb Racing 2 for free from Google Play Store or App Store. Just search for the game's name on the app store and tap on the install button. The game will be downloaded and installed on your device automatically. You can also use this link to download the game for Android devices: Hill Climb Racing 2 - Apps on Google Play . You can also use this link to download the game for iOS devices: Hill Climb Racing 2 on the App Store .
- How do I play Hill Climb Racing 2 online?
- You can play Hill Climb Racing 2 online by tapping on the race button on the main menu. You can then choose from different modes such as cup races, friendly races, or ranked races. You can also select from different tracks and difficulty levels. You will then be matched with up to three other players from around the world. The race will start after a few seconds of countdown. You can control your vehicle by using two pedals: gas and brake. You have to balance speed and stability as you climb hills, jump over obstacles, perform stunts, and avoid crashing. The first player to reach the finish line wins the race.
- How do I join or create a racing team in Hill Climb Racing 2?
- You can join or create a racing team in Hill Climb Racing 2 by tapping on the team button on the main menu. You can then choose to join an existing team or create your own team. If you want to join an existing team, you can search for a team by name, rank, or region. You can also browse the list of recommended teams or join a random team. You can then apply to join the team and wait for the approval of the team leader. If you want to create your own team, you can choose a name, a logo, a description, and a region for your team. You can also set the minimum rank and level required for joining your team. You can then invite your friends or other players to join your team. You can also chat with your teammates, share tips and tricks, and support each other. You can also participate in team seasons, where you can race against other teams and earn points for your team. The team with the most points at the end of the season wins rewards and trophies.
- How do I perform stunt tricks in Hill Climb Racing 2?
- You can perform stunt tricks in Hill Climb Racing 2 by using the hills, ramps, loops, bridges, and other obstacles on the tracks. You can do flips, wheelies, backflips, frontflips, etc. to earn bonus coins and points. To perform a stunt trick, you have to tilt your vehicle forward or backward while you are in the air. You have to time it so that you can land safely and smoothly on the other side. You also have to avoid landing on your nose or tail, as this can cause damage or flip your vehicle over. You can also use boosters such as nitro or wings to help you perform more stunt tricks.
- How do I unlock new vehicles and tracks in Hill Climb Racing 2?
- You can unlock new vehicles and tracks in Hill Climb Racing 2 by reaching certain ranks or levels in the game. You can also unlock them by finding them in chests or tickets that you can earn by winning races, completing challenges, participating in events, etc. You can also buy them with coins or gems that you can collect on the tracks or purchase with real money.
- How do I download Hill Climb Racing 2 from apkmirror?
- You can download Hill Climb Racing 2 from apkmirror by following these steps:
-
-Go to apkmirror.com on your web browser.
-Search for Hill Climb Racing 2 on the search bar.
-Select the latest version of the game from the list of results.
-Scroll down and tap on the download button.
-Wait for the download to finish and then open the file.
-Allow the installation of apps from unknown sources if prompted.
-Follow the instructions on the screen to install the game.
-Enjoy playing Hill Climb Racing 2!
-
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/skin_mask.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/skin_mask.py
deleted file mode 100644
index a8a74e4c3b40d13b0258b83a12f56321a85bb179..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/skin_mask.py
+++ /dev/null
@@ -1,125 +0,0 @@
-"""This script is to generate skin attention mask for Deep3DFaceRecon_pytorch
-"""
-
-import math
-import numpy as np
-import os
-import cv2
-
-class GMM:
- def __init__(self, dim, num, w, mu, cov, cov_det, cov_inv):
- self.dim = dim # feature dimension
- self.num = num # number of Gaussian components
- self.w = w # weights of Gaussian components (a list of scalars)
- self.mu= mu # mean of Gaussian components (a list of 1xdim vectors)
- self.cov = cov # covariance matrix of Gaussian components (a list of dimxdim matrices)
-        self.cov_det = cov_det # pre-computed determinant of covariance matrices (a list of scalars)
- self.cov_inv = cov_inv # pre-computed inverse covariance matrices (a list of dimxdim matrices)
-
- self.factor = [0]*num
- for i in range(self.num):
- self.factor[i] = (2*math.pi)**(self.dim/2) * self.cov_det[i]**0.5
-
- def likelihood(self, data):
- assert(data.shape[1] == self.dim)
- N = data.shape[0]
- lh = np.zeros(N)
-
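-        # Accumulate the mixture likelihood sum_i w_i * N(x; mu_i, cov_i) over all samples,
-        # reusing the pre-computed inverse covariances and normalization factors.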
- for i in range(self.num):
- data_ = data - self.mu[i]
-
- tmp = np.matmul(data_,self.cov_inv[i]) * data_
- tmp = np.sum(tmp,axis=1)
- power = -0.5 * tmp
-
- p = np.array([math.exp(power[j]) for j in range(N)])
- p = p/self.factor[i]
- lh += p*self.w[i]
-
- return lh
-
-
-def _rgb2ycbcr(rgb):
- m = np.array([[65.481, 128.553, 24.966],
- [-37.797, -74.203, 112],
- [112, -93.786, -18.214]])
- shape = rgb.shape
- rgb = rgb.reshape((shape[0] * shape[1], 3))
- ycbcr = np.dot(rgb, m.transpose() / 255.)
- ycbcr[:, 0] += 16.
- ycbcr[:, 1:] += 128.
- return ycbcr.reshape(shape)
-
-
-def _bgr2ycbcr(bgr):
- rgb = bgr[..., ::-1]
- return _rgb2ycbcr(rgb)
-
-
-gmm_skin_w = [0.24063933, 0.16365987, 0.26034665, 0.33535415]
-gmm_skin_mu = [np.array([113.71862, 103.39613, 164.08226]),
- np.array([150.19858, 105.18467, 155.51428]),
- np.array([183.92976, 107.62468, 152.71820]),
- np.array([114.90524, 113.59782, 151.38217])]
-gmm_skin_cov_det = [5692842.5, 5851930.5, 2329131., 1585971.]
-gmm_skin_cov_inv = [np.array([[0.0019472069, 0.0020450759, -0.00060243998],[0.0020450759, 0.017700525, 0.0051420014],[-0.00060243998, 0.0051420014, 0.0081308950]]),
- np.array([[0.0027110141, 0.0011036990, 0.0023122299],[0.0011036990, 0.010707724, 0.010742856],[0.0023122299, 0.010742856, 0.017481629]]),
- np.array([[0.0048026871, 0.00022935172, 0.0077668377],[0.00022935172, 0.011729696, 0.0081661865],[0.0077668377, 0.0081661865, 0.025374353]]),
- np.array([[0.0011989699, 0.0022453172, -0.0010748957],[0.0022453172, 0.047758564, 0.020332102],[-0.0010748957, 0.020332102, 0.024502251]])]
-
-gmm_skin = GMM(3, 4, gmm_skin_w, gmm_skin_mu, [], gmm_skin_cov_det, gmm_skin_cov_inv)
-
-gmm_nonskin_w = [0.12791070, 0.31130761, 0.34245777, 0.21832393]
-gmm_nonskin_mu = [np.array([99.200851, 112.07533, 140.20602]),
- np.array([110.91392, 125.52969, 130.19237]),
- np.array([129.75864, 129.96107, 126.96808]),
- np.array([112.29587, 128.85121, 129.05431])]
-gmm_nonskin_cov_det = [458703648., 6466488., 90611376., 133097.63]
-gmm_nonskin_cov_inv = [np.array([[0.00085371657, 0.00071197288, 0.00023958916],[0.00071197288, 0.0025935620, 0.00076557708],[0.00023958916, 0.00076557708, 0.0015042332]]),
- np.array([[0.00024650150, 0.00045542428, 0.00015019422],[0.00045542428, 0.026412144, 0.018419769],[0.00015019422, 0.018419769, 0.037497383]]),
- np.array([[0.00037054974, 0.00038146760, 0.00040408765],[0.00038146760, 0.0085505722, 0.0079136286],[0.00040408765, 0.0079136286, 0.010982352]]),
- np.array([[0.00013709733, 0.00051228428, 0.00012777430],[0.00051228428, 0.28237113, 0.10528370],[0.00012777430, 0.10528370, 0.23468947]])]
-
-gmm_nonskin = GMM(3, 4, gmm_nonskin_w, gmm_nonskin_mu, [], gmm_nonskin_cov_det, gmm_nonskin_cov_inv)
-
-prior_skin = 0.8
-prior_nonskin = 1 - prior_skin
-
-
-# calculate skin attention mask
-def skinmask(imbgr):
- im = _bgr2ycbcr(imbgr)
-
- data = im.reshape((-1,3))
-
- lh_skin = gmm_skin.likelihood(data)
- lh_nonskin = gmm_nonskin.likelihood(data)
-
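-    # Bayes' rule: P(skin | pixel) = prior_skin * lh_skin / (prior_skin * lh_skin + prior_nonskin * lh_nonskin)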
- tmp1 = prior_skin * lh_skin
- tmp2 = prior_nonskin * lh_nonskin
- post_skin = tmp1 / (tmp1+tmp2) # posterior probability
-
- post_skin = post_skin.reshape((im.shape[0],im.shape[1]))
-
- post_skin = np.round(post_skin*255)
- post_skin = post_skin.astype(np.uint8)
- post_skin = np.tile(np.expand_dims(post_skin,2),[1,1,3]) # reshape to H*W*3
-
- return post_skin
-
-
-def get_skin_mask(img_path):
- print('generating skin masks......')
- names = [i for i in sorted(os.listdir(
- img_path)) if 'jpg' in i or 'png' in i or 'jpeg' in i or 'PNG' in i]
- save_path = os.path.join(img_path, 'mask')
- if not os.path.isdir(save_path):
- os.makedirs(save_path)
-
- for i in range(0, len(names)):
- name = names[i]
- print('%05d' % (i), ' ', name)
- full_image_name = os.path.join(img_path, name)
- img = cv2.imread(full_image_name).astype(np.float32)
- skin_img = skinmask(img)
- cv2.imwrite(os.path.join(save_path, name), skin_img.astype(np.uint8))
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bed Wars MOD APK How to Get Unlimited Gcubes and Keys for Free.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bed Wars MOD APK How to Get Unlimited Gcubes and Keys for Free.md
deleted file mode 100644
index 80ad449de68ab1b72f47fff90e3593efd00fc334..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bed Wars MOD APK How to Get Unlimited Gcubes and Keys for Free.md
+++ /dev/null
@@ -1,219 +0,0 @@
-
-Bed Wars Mod APK: How to Get Unlimited Gcubes and Keys
-If you are a fan of Bed Wars, a popular team-based PVP game on Roblox and Minecraft, you might be interested in getting unlimited Gcubes and Keys to buy more items and perks in the game. In this article, we will show you how to download and install Bed Wars Mod APK, a modified version of the original game that gives you access to unlimited resources. We will also share some tips and tricks to help you win Bed Wars every time.
- What is Bed Wars?
-Bed Wars is a game mode in Roblox and Minecraft where four teams of four players each play in one match. Each team has its own bed, and they must protect it so players within the team can still revive to defend their base. To play Bed Wars, you need to log onto a server like mc.hypixel.net or robloxbedwars.fandom.com. Then, you need to select the Bed Wars game mode, team size, and map.
-bed wars mod apk (unlimited gcubes and keys) Download >>>>> https://gohhs.com/2uPpvJ
- A team-based PVP game
-The gameplay mechanics are simple, and you can use the arrow keys or "WASD" to move, the "Shift" key to run, Tab to open your inventory, Q to drop a block, "Space" key to jump, and the "Control" key, C, or "CapsLock" to crouch. Team islands have a bed in front and a resource generator in back. The bed is the respawn source of a team and must be protected from enemy teams, while the resource generator spawns Iron and Gold (including Emeralds with the Emerald Forge upgrade) to purchase items at the Item Shop.
- A remake of Hypixel's famous BedWars
-There are islands separate from team islands with Diamond and Emerald Generators. Diamonds generate at a moderate speed and can be used to purchase team upgrades or traps, while Emeralds generate at a slower speed and can be used to purchase stronger items from the Item Shop. These generator islands are usually close to islands of other teams, which can be raided by other teams to break their bed. Blocks and tools are available at shops to defend and break bed protection respectively, but beds do not require tools to be broken. If a team's bed is broken, players on that team will lose their respawn ability and be eliminated upon dying once more. Over time, events will happen to speed up the game. These events start with Diamond and Emerald generator upgrades up to level 3, then bed destruction, then sudden death where Dragons spawn. When all opposing beds and players are eliminated, the last team alive wins.
- What is Bed Wars Mod APK?
-Bed Wars Mod APK is a modified version of the original game that gives you access to unlimited resources such as Gcubes and Keys. Gcubes are the premium currency in Bed Wars that can be used to buy special items such as balloons, pearls, fireballs, ender chests, etc. Keys are another currency that can be used to buy privileges such as VIP, MVP, and MVP+. These privileges give you access to exclusive cosmetics, perks, and commands in the game. Normally, you would have to pay real money or complete tasks to get Gcubes and Keys, but with Bed Wars Mod APK, you can get them for free and without any limits.
- A modified version of the original game
-Bed Wars Mod APK is not an official version of the game, but a modified one that has been created by third-party developers. This means that it is not available on the official app stores like Google Play or Apple Store, but you have to download it from other sources. It also means that it may not be compatible with the latest updates of the original game, and it may have some bugs or glitches. However, it also means that it has some features that the original game does not have, such as unlimited Gcubes and Keys.
- Features of Bed Wars Mod APK
-Unlimited Gcubes and Keys
-The main feature of Bed Wars Mod APK is that it gives you unlimited Gcubes and Keys, which are the two most important currencies in the game. With unlimited Gcubes and Keys, you can buy any item or privilege you want without worrying about running out of them. You can also use them to upgrade your items and team perks faster and easier. This will give you a huge advantage over other players who have to earn or buy them with real money.
- Other benefits
-Aside from unlimited Gcubes and Keys, Bed Wars Mod APK also has some other benefits that make the game more fun and enjoyable. For example, it has a built-in anti-ban system that prevents you from getting banned by the game developers for using a modded version. It also has a user-friendly interface that makes it easy to navigate and use. It also has no ads or pop-ups that may interrupt your gaming experience. It also has no root or jailbreak requirements, which means you can install it on any device without any risks.
- How to Download and Install Bed Wars Mod APK?
-If you want to download and install Bed Wars Mod APK on your device, you need to follow some simple steps. Here they are:
-
-Go to a reliable website that offers Bed Wars Mod APK for free download. You can search for it on Google or use this link: .
-Click on the download button and wait for the file to be downloaded on your device. The file size is about 100 MB, so make sure you have enough storage space and a stable internet connection.
-Once the file is downloaded, locate it in your file manager or downloads folder and tap on it to open it.
-
- Steps to install
-
-Before you install Bed Wars Mod APK, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the official app stores. To do this, go to your device settings > security > unknown sources and toggle it on.
-After enabling unknown sources, go back to the file manager or downloads folder and tap on the Bed Wars Mod APK file again.
-A pop-up window will appear asking you to confirm the installation. Click on install and wait for the process to finish.
-Once the installation is done, you will see a Bed Wars icon on your home screen or app drawer. Tap on it to launch the game and enjoy unlimited Gcubes and Keys.
-
- How to Use Bed Wars Mod APK?
-Using Bed Wars Mod APK is very easy and similar to using the original game. You just need to follow these steps:
- How to access unlimited Gcubes and Keys
-
-Open the game and log in with your account or create a new one if you don't have one already.
-Go to the shop menu where you can buy items and privileges with Gcubes and Keys.
-You will see that your Gcubes and Keys balance is unlimited and you can buy anything you want without spending any real money.
-You can also use Gcubes and Keys to upgrade your items and team perks in the game lobby or during a match.
-
- How to use them wisely
-Although you have unlimited Gcubes and Keys, you should still use them wisely and not waste them on unnecessary things. Here are some tips on how to use them wisely:
-
-Don't buy items or privileges that you don't need or already have. For example, don't buy balloons if you already have enough of them, or don't buy VIP or MVP if you already have MVP+.
-Buy items or privileges that can help you win the game or improve your gaming experience. For example, buy fireballs, pearls, ender chests, etc. to attack or defend your bed, or buy VIP or MVP to get more cosmetics, perks, and commands.
-Don't spam items or privileges that may annoy other players or cause lag in the game. For example, don't spam balloons, fireballs, pearls, etc. in a match, or don't spam commands like /fly, /nick, /chatcolor, etc. in the chat.
-Don't brag about having unlimited Gcubes and Keys or use them to bully other players. For example, don't show off your items or privileges in the lobby or in a match, or don't use them to destroy other beds or kill other players without mercy.
-
- Tips and Tricks to Win Bed Wars Every Time
-Having unlimited Gcubes and Keys can give you a big advantage in Bed Wars, but it doesn't guarantee that you will win every time. You still need to have some skills and strategies to play the game well. Here are some tips and tricks to help you win Bed Wars every time:
- Protect your bed
-Your bed is your lifeline in Bed Wars, and you should protect it at all costs. You can use blocks, tools, traps, and items to defend your bed from enemy attacks. You can also use Gcubes and Keys to buy better bed protection materials and upgrades. Here are some examples of bed protection materials and upgrades:
-
-
-| Material | Cost | Effect |
-| --- | --- | --- |
-| Wool | 4 Iron | Cheap and easy to get, but weak against tools and fireballs. |
-| Wood | 4 Gold | Stronger than wool, but weak against axes and fireballs. |
-| End Stone | 12 Iron | Stronger than wood, but weak against pickaxes and fireballs. |
-| Obsidian | 4 Emeralds | The strongest material, but very expensive and weak against diamond pickaxes. |
-| Glass | 12 Iron | Blast-proof against fireballs and TNT, but weak against any tool. |
-| Ladders | 4 Iron | Blast-proof against fireballs and TNT, but weak against any tool. Can also be used to climb up walls. |
-
- Bed Protection Upgrades (Team Upgrades)
- These upgrades can be bought with Diamonds at the Team Upgrade Shop.
-
-| Name | Cost | Effect |
-| --- | --- | --- |
-| Reinforced Armor I-IV | 2/4/6/8 Diamonds | Gives your team +1/+2/+3/+4 armor points respectively. |
-| Sharpness I-II | 4/8 Diamonds | Gives your team +1/+2 attack damage respectively. |
-| Haste I-II | 2/4 Diamonds | Gives your team +20%/+40% mining speed respectively. |
-| Heal Pool I-II | 3/6 Diamonds | Gives your team regeneration I/II within 8 blocks of your bed respectively. |
-| Trap I-III | 1/2/4 Diamonds | Sets a trap that activates when an enemy enters your base. The traps are: It's a trap! (gives blindness and slowness), Counter-Offensive Trap (gives jump boost V), Alarm Trap (reveals invisible enemies). |
-| Dragon Buff I-II | 5/10 Diamonds | Gives your team +1/+2 dragons during sudden death respectively. |
-
- Bed Protection Items (Item Shop)
-
-| Name | Cost | Effect |
-| --- | --- | --- |
-| TNT | 8 Gold / 1 Gcube | Explodes after 4 seconds, destroying blocks and damaging enemies. |
-| Fireball | 40 Iron / 1 Gcube | Shoots a fireball that explodes on impact, destroying blocks and knocking back enemies. |
-| Ender Pearl | 4 Emeralds / 1 Gcube | Teleports you to the location where it lands. |
-| Invisibility Potion | 2 Emeralds / 1 Gcube | Makes you invisible for 30 seconds, but you will still be visible if you wear armor or hold items. |
-| Jump Boost Potion | 1 Emerald / 1 Gcube | Gives you jump boost V for 45 seconds, allowing you to jump higher and farther. |
-| Speed Potion | 1 Emerald / 1 Gcube | Gives you speed II for 45 seconds, allowing you to move faster. |
-
- Destroy other beds
-The ultimate goal of Bed Wars is to destroy the beds of other teams and eliminate them from the game. You can use blocks, tools, items, and strategies to attack other beds and break their protection. You can also use Gcubes and Keys to buy better tools and items to raid other bases. Here are some examples of tools and items that can help you destroy other beds:
-
-
-| Tool | Cost | Effect |
-| --- | --- | --- |
-| Wooden Sword | Free | The default weapon that deals 4 damage per hit. |
-| Stone Sword | 10 Iron / 1 Gcube | A better weapon that deals 5 damage per hit. |
-| Iron Sword | 7 Gold / 1 Gcube | An even better weapon that deals 6 damage per hit. |
-| Diamond Sword | 4 Emeralds / 1 Gcube | The best weapon that deals 7 damage per hit. |
-
- Tools (Item Shop)
-
-| Name | Cost | Effect |
-| --- | --- | --- |
-| Shears | 20 Iron / 1 Gcube | A tool that can break wool and glass faster. |
-| Wooden Pickaxe | 10 Iron / 1 Gcube | A tool that can break wood and end stone faster. |
-| Stone Pickaxe | 10 Gold / 1 Gcube | A tool that can break wood, end stone, and obsidian faster. |
-| Iron Pickaxe | 3 Emeralds / 1 Gcube | A tool that can break wood, end stone, obsidian, and bedrock faster. |
-
- Items (Item Shop)
-
-| Name | Cost | Effect |
-| --- | --- | --- |
-| Balloons | 40 Iron / 1 Gcube | A throwable item that can lift you up in the air for a short time. |
-| Magic Milk | 4 Gold / 1 Gcube | A drinkable item that can prevent you from triggering enemy traps for 30 seconds. |
-| Dream Defender | 120 Iron / 1 Gcube | A summonable item that can spawn an iron golem to fight for you. |
-| Bridge Egg | 2 Emeralds / 1 Gcube | A throwable item that can create a bridge of wool where it lands. |
-| Sponge | 10 Gold / 1 Gcube | A placeable item that can absorb water and lava around it. |
-
- Upgrade your items and team perks
-Another way to improve your chances of winning Bed Wars is to upgrade your items and team perks with Gcubes and Keys. You can use Gcubes to buy better items from the Item Shop, such as balloons, pearls, fireballs, ender chests, etc. You can also use Keys to buy privileges from the Privilege Shop, such as VIP, MVP, and MVP+, which give you access to exclusive cosmetics, perks, and commands in the game. You can also use Diamonds to buy team upgrades from the Team Upgrade Shop, such as reinforced armor, sharpness, haste, heal pool, trap, and dragon buff. These upgrades can enhance your team's performance and abilities in the game.
- Communicate with your teammates
-The last but not least tip to win Bed Wars is to communicate with your teammates. Bed Wars is a team-based game, and you need to work together with your team to achieve your goals. You can use the chat or voice chat to communicate with your teammates and coordinate your actions. You can also use commands like /team or /party to create a private team or party with your friends. You can also use commands like /shout or /all to send messages to all players in the game. Communication is key to winning Bed Wars, and you should always be respectful and helpful to your teammates.
- Conclusion
-Bed Wars is a fun and exciting game that you can play on Roblox and Minecraft. It is a team-based PVP game where you have to protect your bed and destroy other beds. You can use blocks, tools, items, and strategies to play the game well. You can also use Bed Wars Mod APK to get unlimited Gcubes and Keys, which are the premium currencies in the game. With unlimited Gcubes and Keys, you can buy any item or privilege you want without spending any real money. You can also use them to upgrade your items and team perks faster and easier. However, you should still use them wisely and not waste them on unnecessary things. You should also follow some tips and tricks to win Bed Wars every time, such as protecting your bed, destroying other beds, upgrading your items and team perks, and communicating with your teammates. We hope this article has helped you learn more about Bed Wars Mod APK and how to get unlimited Gcubes and Keys. Have fun playing Bed Wars!
- FAQs
-Here are some frequently asked questions about Bed Wars Mod APK:
-
-Is Bed Wars Mod APK safe to use?
-Bed Wars Mod APK is safe to use as long as you download it from a reliable website that offers virus-free and malware-free files. However, you should always be careful when downloading and installing any modded or hacked apps on your device, as they may contain harmful or unwanted content. You should also backup your data before using Bed Wars Mod APK, in case something goes wrong.
- Is Bed Wars Mod APK legal to use?
-Bed Wars Mod APK is not legal to use, as it violates the terms of service of the original game developers. By using Bed Wars Mod APK, you are breaking the rules of the game and risking getting banned by the game developers. Therefore, we do not recommend using Bed Wars Mod APK for any purposes.
- Can I play Bed Wars Mod APK online with other players?
-Yes, you can play Bed Wars Mod APK online with other players who are also using the same modded version of the game. However, you cannot play Bed Wars Mod APK online with players who are using the original version of the game, as they are not compatible with each other.
- Can I update Bed Wars Mod APK?
-No, you cannot update Bed Wars Mod APK automatically or manually. If you want to update Bed Wars Mod APK, you have to download and install the latest version of the modded app from another source. However, updating Bed Wars Mod APK may cause some issues or errors in the game, so you should always backup your data before updating.
- Can I uninstall Bed Wars Mod APK?
-Yes, you can uninstall Bed Wars Mod APK anytime you want. To uninstall Bed Wars Mod APK, you just need to go to your device settings > apps > Bed Wars > uninstall and confirm the action. This will remove Bed Wars Mod APK from your device completely.
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/loaders.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/loaders.py
deleted file mode 100644
index eb7ae50f34dd94e08d16951cbe75c9fb282a7868..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/loaders.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility functions to load from the checkpoints.
-Each checkpoint is a torch.saved dict with the following keys:
-- 'xp.cfg': the hydra config as dumped during training. This should be used
- to rebuild the object using the audiocraft.models.builders functions,
-- 'model_best_state': a readily loadable best state for the model, including
- the conditioner. The model obtained from `xp.cfg` should be compatible
- with this state dict. In the case of a LM, the encodec model would not be
- bundled along but instead provided separately.
-
-Those functions also support loading from a remote location with the Torch Hub API.
-They also support overriding some parameters, in particular the device and dtype
-of the returned model.
-"""
-
-from pathlib import Path
-from huggingface_hub import hf_hub_download
-import typing as tp
-import os
-
-from omegaconf import OmegaConf
-import torch
-
-from . import builders
-
-
-HF_MODEL_CHECKPOINTS_MAP = {
- "small": "facebook/musicgen-small",
- "medium": "facebook/musicgen-medium",
- "large": "facebook/musicgen-large",
- "melody": "facebook/musicgen-melody",
-}
-
-
-def _get_state_dict(
- file_or_url_or_id: tp.Union[Path, str],
- filename: tp.Optional[str] = None,
- device='cpu',
- cache_dir: tp.Optional[str] = None,
-):
- # Return the state dict either from a file or url
- file_or_url_or_id = str(file_or_url_or_id)
- assert isinstance(file_or_url_or_id, str)
-
- if os.path.isfile(file_or_url_or_id):
- return torch.load(file_or_url_or_id, map_location=device)
-
- elif file_or_url_or_id.startswith('https://'):
- return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True)
-
- elif file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP:
- assert filename is not None, "filename needs to be defined if using HF checkpoints"
-
- repo_id = HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id]
- file = hf_hub_download(repo_id=repo_id, filename=filename, cache_dir=cache_dir)
- return torch.load(file, map_location=device)
-
- else:
- raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.")
-
-
-def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
- pkg = _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir)
- cfg = OmegaConf.create(pkg['xp.cfg'])
- cfg.device = str(device)
- model = builders.get_compression_model(cfg)
- model.load_state_dict(pkg['best_state'])
- model.eval()
- return model
-
-
-def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
- pkg = _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir)
- cfg = OmegaConf.create(pkg['xp.cfg'])
- cfg.device = str(device)
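-    # On CPU the custom, non memory-efficient attention path is used and weights stay in float32;
-    # otherwise the language model runs in float16.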
- if cfg.device == 'cpu':
- cfg.transformer_lm.memory_efficient = False
- cfg.transformer_lm.custom = True
- cfg.dtype = 'float32'
- else:
- cfg.dtype = 'float16'
- model = builders.get_lm_model(cfg)
- model.load_state_dict(pkg['best_state'])
- model.eval()
- model.cfg = cfg
- return model
diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/activations.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/activations.py
deleted file mode 100644
index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/modules/activations.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from typing import Union, Callable
-
-
-class CustomGLU(nn.Module):
- """Custom Gated Linear Unit activation.
- Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half
- of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation
- function (i.e. sigmoid, swish, etc.).
-
- Args:
- activation (nn.Module): The custom activation to apply in the Gated Linear Unit
- dim (int): the dimension on which to split the input. Default: -1
-
- Shape:
- - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional
- dimensions
- - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2`
-
- Examples::
- >>> m = CustomGLU(nn.Sigmoid())
- >>> input = torch.randn(4, 2)
- >>> output = m(input)
- """
- def __init__(self, activation: nn.Module, dim: int = -1):
- super(CustomGLU, self).__init__()
- self.dim = dim
- self.activation = activation
-
- def forward(self, x: Tensor):
- assert x.shape[self.dim] % 2 == 0 # M = N / 2
- a, b = torch.chunk(x, 2, dim=self.dim)
- return a * self.activation(b)
-
-
-class SwiGLU(CustomGLU):
- """SiLU Gated Linear Unit activation.
- Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(SwiGLU, self).__init__(nn.SiLU(), dim)
-
-
-class GeGLU(CustomGLU):
- """GeLU Gated Linear Unit activation.
- Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(GeGLU, self).__init__(nn.GELU(), dim)
-
-
-class ReGLU(CustomGLU):
- """ReLU Gated Linear Unit activation.
- Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(ReGLU, self).__init__(nn.ReLU(), dim)
-
-
-def get_activation_fn(
- activation: Union[str, Callable[[Tensor], Tensor]]
-) -> Union[str, Callable[[Tensor], Tensor]]:
- """Helper function to map an activation string to the activation class.
- If the supplied activation is not a string that is recognized, the activation is passed back.
-
- Args:
- activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check
- """
- if isinstance(activation, str):
- if activation == "reglu":
- return ReGLU()
- elif activation == "geglu":
- return GeGLU()
- elif activation == "swiglu":
- return SwiGLU()
- return activation
diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/utils/autocast.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/utils/autocast.py
deleted file mode 100644
index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/utils/autocast.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class TorchAutocast:
- """TorchAutocast utility class.
- Allows you to enable and disable autocast. This is specially useful
- when dealing with different architectures and clusters with different
- levels of support.
-
- Args:
- enabled (bool): Whether to enable torch.autocast or not.
- args: Additional args for torch.autocast.
- kwargs: Additional kwargs for torch.autocast
- """
- def __init__(self, enabled: bool, *args, **kwargs):
- self.autocast = torch.autocast(*args, **kwargs) if enabled else None
-
- def __enter__(self):
- if self.autocast is None:
- return
- try:
- self.autocast.__enter__()
- except RuntimeError:
- device = self.autocast.device
- dtype = self.autocast.fast_dtype
- raise RuntimeError(
- f"There was an error autocasting with dtype={dtype} device={device}\n"
- "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16"
- )
-
- def __exit__(self, *args, **kwargs):
- if self.autocast is None:
- return
- self.autocast.__exit__(*args, **kwargs)
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/base64id/lib/base64id.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/base64id/lib/base64id.js
deleted file mode 100644
index 15afe7453fefb19eeca19b42b6ee8b76fae492ac..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/base64id/lib/base64id.js
+++ /dev/null
@@ -1,103 +0,0 @@
-/*!
- * base64id v0.1.0
- */
-
-/**
- * Module dependencies
- */
-
-var crypto = require('crypto');
-
-/**
- * Constructor
- */
-
-var Base64Id = function() { };
-
-/**
- * Get random bytes
- *
- * Uses a buffer if available, falls back to crypto.randomBytes
- */
-
-Base64Id.prototype.getRandomBytes = function(bytes) {
-
- var BUFFER_SIZE = 4096
- var self = this;
-
- bytes = bytes || 12;
-
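-  // Requests larger than BUFFER_SIZE fall through to a direct crypto.randomBytes call;
-  // smaller requests are sliced out of a pre-generated buffer that is refilled
-  // asynchronously once roughly 85% of its slots have been handed out.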
- if (bytes > BUFFER_SIZE) {
- return crypto.randomBytes(bytes);
- }
-
- var bytesInBuffer = parseInt(BUFFER_SIZE/bytes);
- var threshold = parseInt(bytesInBuffer*0.85);
-
- if (!threshold) {
- return crypto.randomBytes(bytes);
- }
-
- if (this.bytesBufferIndex == null) {
- this.bytesBufferIndex = -1;
- }
-
- if (this.bytesBufferIndex == bytesInBuffer) {
- this.bytesBuffer = null;
- this.bytesBufferIndex = -1;
- }
-
- // No buffered bytes available or index above threshold
- if (this.bytesBufferIndex == -1 || this.bytesBufferIndex > threshold) {
-
- if (!this.isGeneratingBytes) {
- this.isGeneratingBytes = true;
- crypto.randomBytes(BUFFER_SIZE, function(err, bytes) {
- self.bytesBuffer = bytes;
- self.bytesBufferIndex = 0;
- self.isGeneratingBytes = false;
- });
- }
-
- // Fall back to sync call when no buffered bytes are available
- if (this.bytesBufferIndex == -1) {
- return crypto.randomBytes(bytes);
- }
- }
-
- var result = this.bytesBuffer.slice(bytes*this.bytesBufferIndex, bytes*(this.bytesBufferIndex+1));
- this.bytesBufferIndex++;
-
- return result;
-}
-
-/**
- * Generates a base64 id
- *
- * (Original version from socket.io )
- */
-
-Base64Id.prototype.generateId = function () {
- var rand = Buffer.alloc(15); // multiple of 3 for base64
- if (!rand.writeInt32BE) {
- return Math.abs(Math.random() * Math.random() * Date.now() | 0).toString()
- + Math.abs(Math.random() * Math.random() * Date.now() | 0).toString();
- }
- this.sequenceNumber = (this.sequenceNumber + 1) | 0;
- rand.writeInt32BE(this.sequenceNumber, 11);
- if (crypto.randomBytes) {
- this.getRandomBytes(12).copy(rand);
- } else {
- // not secure for node 0.4
- [0, 4, 8].forEach(function(i) {
- rand.writeInt32BE(Math.random() * Math.pow(2, 32) | 0, i);
- });
- }
- return rand.toString('base64').replace(/\//g, '_').replace(/\+/g, '-');
-};
-
-/**
- * Export
- */
-
-exports = module.exports = new Base64Id();
diff --git a/spaces/flatindo/scaler/README.md b/spaces/flatindo/scaler/README.md
deleted file mode 100644
index f923d45ead2dd4dff1018916dcfc916b8eded71d..0000000000000000000000000000000000000000
--- a/spaces/flatindo/scaler/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: real ESRGAN 4x
-emoji: 💻
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: Cropinky/esrgan
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/flax-community/DietNerf-Demo/demo/src/utils.py b/spaces/flax-community/DietNerf-Demo/demo/src/utils.py
deleted file mode 100644
index 2db248be39ff3bbe8ac74f8dfd69205b3f12dd27..0000000000000000000000000000000000000000
--- a/spaces/flax-community/DietNerf-Demo/demo/src/utils.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import os
-from functools import partial
-import jax
-from jax import random
-import numpy as np
-from PIL import Image
-
-from jaxnerf.nerf import clip_utils
-from jaxnerf.nerf import utils
-from demo.src.config import NerfConfig
-from demo.src.models import init_model
-
-model, _ = init_model()
-
-
-def render_predict_from_pose(state, theta, phi, radius):
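-    # Render a full image for a spherical camera pose (theta, phi, radius):
-    # build one ray per (downsampled) pixel and render them in chunks with the pmapped model.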
- rng = random.PRNGKey(0)
- partial_render_fn = partial(render_pfn, state.optimizer.target)
- rays = _render_rays_from_pose(theta, phi, radius)
- pred_color, pred_disp, _ = utils.render_image(
- partial_render_fn, rays,
- rng, False, chunk=NerfConfig.CHUNK)
- return pred_color, pred_disp
-
-
-def predict_to_image(pred_out) -> Image:
- image_arr = np.array(np.clip(pred_out, 0., 1.) * 255.).astype(np.uint8)
- return Image.fromarray(image_arr)
-
-
-def _render_rays_from_pose(theta, phi, radius):
- camtoworld = np.array(clip_utils.pose_spherical(radius, theta, phi))
- rays = _camtoworld_matrix_to_rays(camtoworld)
- return rays
-
-
-def _camtoworld_matrix_to_rays(camtoworld):
- """ render one instance of rays given a camera to world matrix (4, 4) """
- pixel_center = 0.
- w, h = NerfConfig.W, NerfConfig.H
- focal, downsample = NerfConfig.FOCAL, NerfConfig.DOWNSAMPLE
- x, y = np.meshgrid( # pylint: disable=unbalanced-tuple-unpacking
- np.arange(0, w, downsample, dtype=np.float32) + pixel_center, # X-Axis (columns)
- np.arange(0, h, downsample, dtype=np.float32) + pixel_center, # Y-Axis (rows)
- indexing="xy")
- camera_dirs = np.stack([(x - w * 0.5) / focal,
- -(y - h * 0.5) / focal,
- -np.ones_like(x)],
- axis=-1)
- directions = (camera_dirs[..., None, :] * camtoworld[None, None, :3, :3]).sum(axis=-1)
- origins = np.broadcast_to(camtoworld[None, None, :3, -1], directions.shape)
- viewdirs = directions / np.linalg.norm(directions, axis=-1, keepdims=True)
- return utils.Rays(origins=origins, directions=directions, viewdirs=viewdirs)
-
-
-def _render_fn(variables, key_0, key_1, rays):
- return jax.lax.all_gather(model.apply(
- variables, key_0, key_1, rays, False),
- axis_name="batch")
-
-
-render_pfn = jax.pmap(_render_fn, in_axes=(None, None, None, 0),
- donate_argnums=3, axis_name="batch")
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/dancewithonenpc.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/dancewithonenpc.py
deleted file mode 100644
index 1a8cb44c2406bbcc0ac38a0b6697bb5170a5e5d1..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/dancewithonenpc.py
+++ /dev/null
@@ -1,344 +0,0 @@
-from gym_minigrid.minigrid import *
-from gym_minigrid.register import register
-
-import time
-from collections import deque
-
-
-class Dancer(NPC):
- """
- A dancing NPC that the agent has to copy
-    NPC executes a sequence of movements and utterances
- """
-
- def __init__(self, color, name, env, dancing_pattern=None,
- dance_len=3, p_sing=.5, hidden_npc=False, sing_only=False):
- super().__init__(color)
- self.name = name
- self.npc_dir = 1 # NPC initially looks downward
- self.npc_type = 0
- self.env = env
- self.actions = self.env.possible_actions
- self.p_sing = p_sing
- self.sing_only = sing_only
- if self.sing_only:
- p_sing = 1
- self.dancing_pattern = dancing_pattern if dancing_pattern else self._gen_dancing_pattern(dance_len, p_sing)
- self.agent_actions = deque(maxlen=len(self.dancing_pattern))
- self.movement_id_to_fun = {self.actions.left: self.rotate_left,
- self.actions.right: self.rotate_right,
- self.actions.forward: self.go_forward}
-        # for visualization only
- self.movement_id_to_str = {self.actions.left: "left",
- self.actions.right: "right",
- self.actions.forward: "forward",
- self.actions.pickup: "pickup",
- self.actions.drop: "drop",
- self.actions.toggle: "toggle",
- self.actions.done: "done",
- None: "None"}
- self.dancing_step_idx = 0
- self.done_dancing = False
- self.add_npc_direction = True
- self.nb_steps = 0
- self.hidden_npc = hidden_npc
-
- def step(self, agent_action, agent_utterance):
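-        # One NPC step: first announce and perform the scripted dance, then record the agent's
-        # (action, utterance) pairs and check whether they reproduce the dancing pattern.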
- agent_matched_moves = False
- utterance = None
-
- if self.nb_steps == 0:
- utterance = "Look at me!"
- if self.nb_steps >= 2: # Wait a couple steps before dancing
- if not self.done_dancing:
- if self.dancing_step_idx == len(self.dancing_pattern):
- self.done_dancing = True
- utterance = "Now repeat my moves!"
- else:
- # NPC moves and speaks according to dance step
- move_id, utterance = self.dancing_pattern[self.dancing_step_idx]
- self.movement_id_to_fun[move_id]()
-
- self.dancing_step_idx += 1
- else: # record agent dancing pattern
- self.agent_actions.append((agent_action, agent_utterance))
-
- if not self.sing_only and list(self.agent_actions) == list(self.dancing_pattern):
- agent_matched_moves = True
- if self.sing_only: # only compare utterances
- if [x[1] for x in self.agent_actions] == [x[1] for x in self.dancing_pattern]:
- agent_matched_moves = True
-
- self.nb_steps += 1
- return agent_matched_moves, utterance
-
- def get_status_str(self):
- readable_dancing_pattern = [(self.movement_id_to_str[dp[0]], dp[1]) for dp in self.dancing_pattern]
- readable_agent_actions = [(self.movement_id_to_str[aa[0]], aa[1]) for aa in self.agent_actions]
- return "dance: {} \n agent: {}".format(readable_dancing_pattern, readable_agent_actions)
-
- def _gen_dancing_pattern(self, dance_len, p_sing):
- available_moves = [self.actions.left, self.actions.right, self.actions.forward]
- dance_pattern = []
- for _ in range(dance_len):
- move = self.env._rand_elem(available_moves)
- sing = None
- if np.random.random() < p_sing:
- sing = DanceWithOneNPCGrammar.random_utterance()
- dance_pattern.append((move, sing))
- return dance_pattern
-
- def can_overlap(self):
- # If the NPC is hidden, agent can overlap on it
- return self.hidden_npc
-
-
-
-class DanceWithOneNPCGrammar(object):
-
- templates = ["Move your", "Shake your"]
- things = ["body", "head"]
-
- grammar_action_space = spaces.MultiDiscrete([len(templates), len(things)])
-
- @classmethod
- def construct_utterance(cls, action):
- return cls.templates[int(action[0])] + " " + cls.things[int(action[1])] + " "
-
- @classmethod
- def random_utterance(cls):
- return np.random.choice(cls.templates) + " " + np.random.choice(cls.things) + " "
-
-
-
-class DanceActions(IntEnum):
- # Turn left, turn right, move forward
- left = 0
- right = 1
- forward = 2
-
-
-class DanceWithOneNPCEnv(MultiModalMiniGridEnv):
- """
- Environment in which the agent is instructed to go to a given object
- named using an English text string
- """
-
- def __init__(
- self,
- size=5,
- hear_yourself=False,
- diminished_reward=True,
- step_penalty=False,
- dance_len=3,
- hidden_npc=False,
- p_sing=.5,
- max_steps=20,
- full_obs=False,
- few_actions=False,
- sing_only=False
-
- ):
- assert size >= 5
- self.empty_symbol = "NA \n"
- self.hear_yourself = hear_yourself
- self.diminished_reward = diminished_reward
- self.step_penalty = step_penalty
- self.dance_len = dance_len
- self.hidden_npc = hidden_npc
- self.p_sing = p_sing
- self.few_actions = few_actions
- self.possible_actions = DanceActions if self.few_actions else MiniGridEnv.Actions
- self.sing_only = sing_only
- if max_steps is None:
- max_steps = 5*size**2
-
- super().__init__(
- grid_size=size,
- max_steps=max_steps,
- # Set this to True for maximum speed
- see_through_walls=True,
- full_obs=full_obs,
- actions=MiniGridEnv.Actions,
- action_space=spaces.MultiDiscrete([
- len(self.possible_actions),
- *DanceWithOneNPCGrammar.grammar_action_space.nvec
- ]),
- add_npc_direction=True
- )
-
- print({
- "size": size,
- "hear_yourself": hear_yourself,
- "diminished_reward": diminished_reward,
- "step_penalty": step_penalty,
- })
-
- def _gen_grid(self, width, height):
- # Create the grid
- self.grid = Grid(width, height, nb_obj_dims=4)
-
- # Randomly vary the room width and height
- width = self._rand_int(5, width+1)
- height = self._rand_int(5, height+1)
-
- # Generate the surrounding walls
- self.grid.wall_rect(0, 0, width, height)
-
- # Generate the surrounding walls
- self.grid.wall_rect(0, 0, width, height)
-
-
- # Set a randomly coloured Dancer NPC
- color = self._rand_elem(COLOR_NAMES)
- self.dancer = Dancer(color, "Ren", self, dance_len=self.dance_len,
- p_sing=self.p_sing, hidden_npc=self.hidden_npc, sing_only=self.sing_only)
-
- # Place it on the middle left side of the room
- left_pos = (int((width / 2) - 1), int(height / 2))
- #right_pos = [(width / 2) + 1, height / 2]
-
- self.grid.set(*left_pos, self.dancer)
- self.dancer.init_pos = left_pos
- self.dancer.cur_pos = left_pos
-
- # Place it randomly left or right
- #self.place_obj(self.dancer,
- # size=(width, height))
-
- # Randomize the agent's start position and orientation
- self.place_agent(size=(width, height))
-
- # Generate the mission string
- self.mission = 'watch dancer and repeat his moves afterwards'
-
- # Dummy beginning string
- self.beginning_string = "This is what you hear. \n"
- self.utterance = self.beginning_string
-
- # utterance appended at the end of each step
- self.utterance_history = ""
-
- # used for rendering
- self.conversation = self.utterance
- self.outcome_info = None
-
- def step(self, action):
- p_action = action[0] if np.isnan(action[0]) else int(action[0])
- if len(action) == 1: # agent cannot speak
- assert self.p_sing == 0, "Non speaking agent used in a dance env requiring to speak"
- utterance_action = [np.nan, np.nan]
- else:
- utterance_action = action[1:]
-
- obs, reward, done, info = super().step(p_action)
-
- if np.isnan(p_action):
- pass
-
-
- # assert all nan or neither nan
- assert len(set(np.isnan(utterance_action))) == 1
- speak_flag = not all(np.isnan(utterance_action))
-
- if speak_flag:
- utterance = DanceWithOneNPCGrammar.construct_utterance(utterance_action)
- self.conversation += "{}: {} \n".format("Agent", utterance)
-
- # Don't let the agent open any of the doors
- if not self.few_actions and p_action == self.actions.toggle:
- done = True
-
- if not self.few_actions and p_action == self.actions.done:
- done = True
-
- # npc's turn
- agent_matched_moves, npc_utterance = self.dancer.step(p_action if not np.isnan(p_action) else None,
- utterance if speak_flag else None)
- if self.hidden_npc:
- npc_utterance = None
- if npc_utterance:
- self.utterance += "{} \n".format(npc_utterance)
- self.conversation += "{}: {} \n".format(self.dancer.name, npc_utterance)
- if agent_matched_moves:
- reward = self._reward()
- self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1))
- done = True
-
- # discount
- if self.step_penalty:
- reward = reward - 0.01
-
- if self.hidden_npc:
- # remove npc from agent view
- npc_obs_idx = np.argwhere(obs['image'] == 11)
- if npc_obs_idx.size != 0: # agent sees npc
- obs['image'][npc_obs_idx[0][0], npc_obs_idx[0][1], :] = [1, 0, 0, 0]
-
- if done and reward == 0:
- self.outcome_info = "FAILURE: agent got {} reward \n".format(reward)
-
- # fill observation with text
- self.append_existing_utterance_to_history()
- obs = self.add_utterance_to_observation(obs)
- self.reset_utterance()
-
- return obs, reward, done, info
-
-
- def _reward(self):
- if self.diminished_reward:
- return super()._reward()
- else:
- return 1.0
-
- def render(self, *args, **kwargs):
- obs = super().render(*args, **kwargs)
-
- print("conversation:\n", self.conversation)
- print("utterance_history:\n", self.utterance_history)
-
- self.window.clear_text() # erase previous text
-
- self.window.set_caption(self.conversation) # overwrites super class caption
- self.window.ax.set_title(self.dancer.get_status_str(), loc="left", fontsize=10)
- if self.outcome_info:
- color = None
- if "SUCCESS" in self.outcome_info:
- color = "lime"
- elif "FAILURE" in self.outcome_info:
- color = "red"
- self.window.add_text(*(0.01, 0.85, self.outcome_info),
- **{'fontsize':15, 'color':color, 'weight':"bold"})
-
- self.window.show_img(obs) # re-draw image to add changes to window
-
- return obs
-
-
-
-
-class DanceWithOneNPC8x8Env(DanceWithOneNPCEnv):
- def __init__(self, **kwargs):
- super().__init__(size=8, **kwargs)
-
-class DanceWithOneNPC6x6Env(DanceWithOneNPCEnv):
- def __init__(self, **kwargs):
- super().__init__(size=6, **kwargs)
-
-
-
-register(
- id='MiniGrid-DanceWithOneNPC-5x5-v0',
- entry_point='gym_minigrid.envs:DanceWithOneNPCEnv'
-)
-
-register(
- id='MiniGrid-DanceWithOneNPC-6x6-v0',
- entry_point='gym_minigrid.envs:DanceWithOneNPC6x6Env'
-)
-
-register(
- id='MiniGrid-DanceWithOneNPC-8x8-v0',
- entry_point='gym_minigrid.envs:DanceWithOneNPC8x8Env'
-)
\ No newline at end of file
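
Note on the environment deleted above: its action space is a `MultiDiscrete` that couples a primitive movement action with the indices of a templated utterance. The sketch below shows only that composite-action idea; the move count and the 4x6 grammar are illustrative assumptions, not the actual `DanceWithOneNPCGrammar`.

```python
# Sketch only: a composite MultiDiscrete action = (movement, template index, word index).
import numpy as np
from gym import spaces

N_MOVES = 7                                            # assumed number of primitive moves
TEMPLATES = ["Where is", "Open", "Close", "What is"]   # toy grammar, not the env's own
WORDS = ["sesame", "the exit", "the wall", "you", "the floor", "the window"]

action_space = spaces.MultiDiscrete([N_MOVES, len(TEMPLATES), len(WORDS)])

def decode(action: np.ndarray):
    """Split a sampled action into a movement index and a constructed utterance."""
    move, tmpl, word = (int(a) for a in action)
    return move, f"{TEMPLATES[tmpl]} {WORDS[word]}"

move, utterance = decode(action_space.sample())
print(move, utterance)
```
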
diff --git a/spaces/gagan3012/summarization/t5s/__main__.py b/spaces/gagan3012/summarization/t5s/__main__.py
deleted file mode 100644
index 4e28416e104515e90fca4b69cc60d0c61fd15d61..0000000000000000000000000000000000000000
--- a/spaces/gagan3012/summarization/t5s/__main__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .cli import main
-
-main()
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/logging.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/logging.py
deleted file mode 100644
index 4aa0e04bb9b3ab2a4bfbc4def50404ccbac2c6e6..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/logging.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-
-import torch.distributed as dist
-
-logger_initialized = {}
-
-
-def get_logger(name, log_file=None, log_level=logging.INFO, file_mode='w'):
- """Initialize and get a logger by name.
-
- If the logger has not been initialized, this method will initialize the
- logger by adding one or two handlers, otherwise the initialized logger will
- be directly returned. During initialization, a StreamHandler will always be
- added. If `log_file` is specified and the process rank is 0, a FileHandler
- will also be added.
-
- Args:
- name (str): Logger name.
- log_file (str | None): The log filename. If specified, a FileHandler
- will be added to the logger.
- log_level (int): The logger level. Note that only the process of
-            rank 0 is affected, while other processes will set the level to
-            "ERROR" and thus be silent most of the time.
- file_mode (str): The file mode used in opening log file.
- Defaults to 'w'.
-
- Returns:
- logging.Logger: The expected logger.
- """
- logger = logging.getLogger(name)
- if name in logger_initialized:
- return logger
- # handle hierarchical names
- # e.g., logger "a" is initialized, then logger "a.b" will skip the
- # initialization since it is a child of "a".
- for logger_name in logger_initialized:
- if name.startswith(logger_name):
- return logger
-
- # handle duplicate logs to the console
- # Starting in 1.8.0, PyTorch DDP attaches a StreamHandler (NOTSET)
- # to the root logger. As logger.propagate is True by default, this root
- # level handler causes logging messages from rank>0 processes to
- # unexpectedly show up on the console, creating much unwanted clutter.
- # To fix this issue, we set the root logger's StreamHandler, if any, to log
- # at the ERROR level.
- for handler in logger.root.handlers:
- if type(handler) is logging.StreamHandler:
- handler.setLevel(logging.ERROR)
-
- stream_handler = logging.StreamHandler()
- handlers = [stream_handler]
-
- if dist.is_available() and dist.is_initialized():
- rank = dist.get_rank()
- else:
- rank = 0
-
- # only rank 0 will add a FileHandler
- if rank == 0 and log_file is not None:
- # Here, the default behaviour of the official logger is 'a'. Thus, we
- # provide an interface to change the file mode to the default
- # behaviour.
- file_handler = logging.FileHandler(log_file, file_mode)
- handlers.append(file_handler)
-
- formatter = logging.Formatter(
- '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
- for handler in handlers:
- handler.setFormatter(formatter)
- handler.setLevel(log_level)
- logger.addHandler(handler)
-
- if rank == 0:
- logger.setLevel(log_level)
- else:
- logger.setLevel(logging.ERROR)
-
- logger_initialized[name] = True
-
- return logger
-
-
-def print_log(msg, logger=None, level=logging.INFO):
- """Print a log message.
-
- Args:
- msg (str): The message to be logged.
- logger (logging.Logger | str | None): The logger to be used.
- Some special loggers are:
- - "silent": no message will be printed.
- - other str: the logger obtained with `get_root_logger(logger)`.
- - None: The `print()` method will be used to print log messages.
- level (int): Logging level. Only available when `logger` is a Logger
- object or "root".
- """
- if logger is None:
- print(msg)
- elif isinstance(logger, logging.Logger):
- logger.log(level, msg)
- elif logger == 'silent':
- pass
- elif isinstance(logger, str):
- _logger = get_logger(logger)
- _logger.log(level, msg)
- else:
- raise TypeError(
- 'logger should be either a logging.Logger object, str, '
- f'"silent" or None, but got {type(logger)}')
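
For orientation, a short usage sketch of the two helpers deleted above (it assumes `get_logger` and `print_log` are in scope; the project name and log file are made up):

```python
# Hypothetical usage: rank 0 logs to the console and a file, while other ranks
# are raised to ERROR level and stay quiet.
import logging

logger = get_logger("my_project", log_file="train.log", log_level=logging.INFO)
logger.info("visible on rank 0 (console + train.log), silenced on other ranks")

print_log("falls back to plain print() when no logger is given")
print_log("routed through the named logger", logger="my_project")
print_log("dropped entirely", logger="silent")
```
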
diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/unet/__init__.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/unet/__init__.py
deleted file mode 100644
index 8b9a367cd7338999a742961fbc1a93289a6380da..0000000000000000000000000000000000000000
--- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/unet/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .model import Unet
diff --git a/spaces/gigant/romanian-whisper/README.md b/spaces/gigant/romanian-whisper/README.md
deleted file mode 100644
index ce1362015a9f479345ad95a9f01a48a5a98b58bb..0000000000000000000000000000000000000000
--- a/spaces/gigant/romanian-whisper/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Romanian Whisper Demo
-emoji: 🤫🇷🇴
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-tags:
-- whisper-event
-duplicated_from: whisper-event/whisper-demo
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/giskardai/giskard/README.md b/spaces/giskardai/giskard/README.md
deleted file mode 100644
index 7a2f840669547f759d4adb5e218e8bf032cabab9..0000000000000000000000000000000000000000
--- a/spaces/giskardai/giskard/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Giskard Hub
-emoji: 🐢
-colorFrom: green
-colorTo: green
-sdk: docker
-app_port: 7860
-pinned: false
-duplicated_from: giskardai/giskard
----
-Welcome to Giskard!
-You can modify this space to suit your needs.
-
-The goal is to upload your model so you can inspect and test it with Giskard.
-
-In the Dockerfile you can change PYTHON_VERSION to any Python version between 3.7 and 3.10.
-The default value is 3.7.13. If you have no specific needs, we advise you to stay with this version because the build will be quicker.
-
-In the requirements.txt file you can add every Python package your model needs. Be careful: the giskard package is mandatory.
-
-Finally, the main code is in the project.py file, where you can choose the name and description of your project.
-Then you can upload your model and dataset by following the Giskard documentation: https://docs.giskard.ai/start/guides/upload-your-model.
-
-As an example, we already built one model and dataset in the project.py file. Feel free to delete it.
-If you have any questions, you can contact us on LinkedIn: https://www.linkedin.com/in/alexcombessie/
\ No newline at end of file
diff --git a/spaces/giswqs/solara-demo/pages/07_ipyleaflet.py b/spaces/giswqs/solara-demo/pages/07_ipyleaflet.py
deleted file mode 100644
index b01f9a17c8f95efa5ffd96437dd44ddeeaa11e76..0000000000000000000000000000000000000000
--- a/spaces/giswqs/solara-demo/pages/07_ipyleaflet.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import ipyleaflet
-import solara
-import ipywidgets as widgets
-
-zoom = solara.reactive(2)
-center = solara.reactive((20, 0))
-
-
-class Map(ipyleaflet.Map):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
- self.layout.height = '600px'
- # Add what you want below
-
- label = widgets.Label('Clicked location')
- output = widgets.Output()
- widget = widgets.VBox([label, output])
- control = ipyleaflet.WidgetControl(widget=widget, position='bottomright')
- self.add_control(control)
-
- def handle_interaction(**kwargs):
- latlon = kwargs.get("coordinates")
- if kwargs.get("type") == "click":
- with output:
- output.clear_output()
- print(latlon)
-
- self.on_interaction(handle_interaction)
-
-
-@solara.component
-def Page():
- with solara.Column(style={"min-width": "500px"}):
- solara.SliderInt(label="Zoom level", value=zoom, min=1, max=20)
- Map.element(
- zoom=zoom.value,
- on_zoom=zoom.set,
- center=center.value,
- on_center=center.set,
- scroll_wheel_zoom=True,
-
- )
- solara.Text(f"Zoom: {zoom.value}")
- solara.Text(f"Center: {center.value}")
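
The page deleted above relies on one pattern worth spelling out: module-level `solara.reactive` values are handed both to UI controls and to the ipyleaflet widget, so slider, map, and text stay in sync. A stripped-down sketch of just that binding (no map, purely illustrative):

```python
import solara

zoom = solara.reactive(2)  # shared state read and written by several components

@solara.component
def Page():
    with solara.Column():
        solara.SliderInt(label="Zoom level", value=zoom, min=1, max=20)
        solara.Text(f"Zoom: {zoom.value}")
```
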
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Beyonce I Miss You Mp3 Skull How to Get the Song on Your Device.md b/spaces/gotiQspiryo/whisper-ui/examples/Beyonce I Miss You Mp3 Skull How to Get the Song on Your Device.md
deleted file mode 100644
index 052d134fcb5466043c25f2605a07d85591e8fa0a..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Beyonce I Miss You Mp3 Skull How to Get the Song on Your Device.md
+++ /dev/null
@@ -1,6 +0,0 @@
-beyonce i miss you mp3 skull DOWNLOAD ○○○ https://urlgoal.com/2uyLXX
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/gradio/HuBERT/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py b/spaces/gradio/HuBERT/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py
deleted file mode 100644
index 079db13e61c5ef46d1b1d288012145148eb0be04..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py
+++ /dev/null
@@ -1,157 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.criterions.label_smoothed_cross_entropy import label_smoothed_nll_loss
-
-
-@register_criterion("label_smoothed_cross_entropy_r3f")
-class LabelSmoothedCrossEntropyR3FCriterion(FairseqCriterion):
- def __init__(
- self, task, sentence_avg, label_smoothing, eps, r3f_lambda, noise_type
- ):
- super().__init__(task)
- self.sentence_avg = sentence_avg
- self.label_smoothing = label_smoothing
- self.eps = eps
- self.r3f_lambda = r3f_lambda
- self.noise_type = noise_type
- if self.noise_type in {"normal"}:
- self.noise_sampler = torch.distributions.normal.Normal(
- loc=0.0, scale=self.eps
- )
- elif self.noise_type == "uniform":
- self.noise_sampler = torch.distributions.uniform.Uniform(
- low=-self.eps, high=self.eps
- )
- else:
- raise Exception(f"unrecognized noise type {self.noise_type}")
-
- @staticmethod
- def add_args(parser):
- """Add criterion-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('--label-smoothing', default=0., type=float, metavar='D',
- help='epsilon for label smoothing, 0 means no label smoothing')
- parser.add_argument('--eps', type=float, default=1e-5,
- help='noise eps')
- parser.add_argument('--r3f-lambda', type=float, default=1.0,
- help='lambda for combining logistic loss and noisy KL loss')
- parser.add_argument('--noise-type', type=str, default='normal',
- choices=['normal', 'uniform'],
- help='type of noises')
- # fmt: on
-
- def _get_symm_kl(self, noised_logits, input_logits):
- return (
- F.kl_div(
- F.log_softmax(noised_logits, dim=-1, dtype=torch.float32),
- F.softmax(input_logits, dim=-1, dtype=torch.float32),
- None,
- None,
- "sum",
- )
- + F.kl_div(
- F.log_softmax(input_logits, dim=-1, dtype=torch.float32),
- F.softmax(noised_logits, dim=-1, dtype=torch.float32),
- None,
- None,
- "sum",
- )
- ) / noised_logits.size(0)
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- token_embeddings = model.encoder.embed_tokens(sample["net_input"]["src_tokens"])
- input_logits, extra = model(**sample["net_input"])
- loss, nll_loss = self.compute_loss(
- model, (input_logits, extra), sample, reduce=reduce
- )
- sample_size = (
- sample["target"].size(0) if self.sentence_avg else sample["ntokens"]
- )
-
- if model.training:
- noise = self.noise_sampler.sample(sample_shape=token_embeddings.shape).to(
- token_embeddings
- )
- noised_embeddings = token_embeddings.clone() + noise
-
- noised_logits, _ = model(
- **sample["net_input"], token_embeddings=noised_embeddings
- )
- symm_kl = self._get_symm_kl(noised_logits, input_logits)
-
- if model.training:
- symm_kl = symm_kl * sample_size
- loss = loss + self.r3f_lambda * symm_kl
-
- logging_output = {
- "loss": loss.data,
- "nll_loss": nll_loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample["target"].size(0),
- "sample_size": sample_size,
- }
-
- if model.training:
- logging_output.update(
- symm_kl=utils.item(symm_kl.data) if reduce else symm_kl.data
- )
-
- return loss, sample_size, logging_output
-
- def compute_loss(self, model, net_output, sample, reduce=True):
- lprobs = model.get_normalized_probs(net_output, log_probs=True)
- lprobs = lprobs.view(-1, lprobs.size(-1))
- target = model.get_targets(sample, net_output).view(-1, 1)
- loss, nll_loss = label_smoothed_nll_loss(
- lprobs,
- target,
- self.label_smoothing,
- ignore_index=self.padding_idx,
- reduce=reduce,
- )
- return loss, nll_loss
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
- symm_kl_sum = sum(log.get("symm_kl", 0) for log in logging_outputs)
-
- metrics.log_scalar("symm_kl", symm_kl_sum / sample_size, sample_size, round=3)
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- metrics.log_scalar(
- "nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3
- )
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
- """
- return True
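
The heart of the criterion deleted above is the R3F regulariser: add small noise to the token embeddings, run the model a second time, and penalise the symmetric KL divergence between the clean and noised output distributions. A self-contained sketch with a toy stand-in model (the linear layer and all shapes are assumptions, not fairseq code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def symmetric_kl(noised_logits, clean_logits):
    # KL(noised || clean) + KL(clean || noised), summed and averaged over the batch
    return (
        F.kl_div(F.log_softmax(noised_logits, -1), F.softmax(clean_logits, -1), reduction="sum")
        + F.kl_div(F.log_softmax(clean_logits, -1), F.softmax(noised_logits, -1), reduction="sum")
    ) / noised_logits.size(0)

model = nn.Linear(16, 100)                           # stand-in for the seq2seq model
emb = torch.randn(8, 5, 16)                          # token embeddings (batch, seq, dim)
noise = torch.empty_like(emb).uniform_(-1e-5, 1e-5)  # eps = 1e-5, uniform noise type

clean_logits = model(emb)
noised_logits = model(emb + noise)

task_loss = torch.tensor(2.3)                        # placeholder for the label-smoothed NLL
total_loss = task_loss + 1.0 * symmetric_kl(noised_logits, clean_logits)  # r3f_lambda = 1.0
```
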
diff --git a/spaces/gstaff/mp4-converter/README.md b/spaces/gstaff/mp4-converter/README.md
deleted file mode 100644
index 0a5dad1c4866ff31dce952fbd7aff7ea13cf05d5..0000000000000000000000000000000000000000
--- a/spaces/gstaff/mp4-converter/README.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: Mp4 Converter
-emoji: 🎥
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.48.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-# 2023-10-16 Daily Demo - MP4 Converter
-
-A daily demo space created by [@gstaff](https://huggingface.co/gstaff).
-
-## Description
-A utility to convert mp4 files to animated gifs.
-
-
-
-## Credits
-Example video from gradio docs.
\ No newline at end of file
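
The conversion itself lives in the space's app.py, which is not part of this diff. For reference, one minimal way to perform the same mp4-to-GIF conversion with moviepy — an assumption about tooling, as the space may just as well call ffmpeg directly:

```python
# Hypothetical stand-alone converter, not the space's actual app.py.
from moviepy.editor import VideoFileClip

def mp4_to_gif(src: str, dst: str, fps: int = 10) -> str:
    clip = VideoFileClip(src)
    clip.write_gif(dst, fps=fps)   # a lower fps keeps the resulting GIF small
    clip.close()
    return dst

mp4_to_gif("example.mp4", "example.gif")
```
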
diff --git a/spaces/guymorlan/Arabic2Taatik/app.py b/spaces/guymorlan/Arabic2Taatik/app.py
deleted file mode 100644
index a06fbb067ecb16afaea50b6d5d07f538ca5a8f2a..0000000000000000000000000000000000000000
--- a/spaces/guymorlan/Arabic2Taatik/app.py
+++ /dev/null
@@ -1,241 +0,0 @@
-from torch import nn
-from transformers import CanineModel, CanineForTokenClassification, CaninePreTrainedModel, CanineTokenizer
-from transformers.modeling_outputs import TokenClassifierOutput
-import gradio as gr
-
-
-arabic_to_hebrew = {
- # regular letters
- "ا": "א", "أ": "א", "إ": "א", "ء": "א", "ئ": "א", "ؤ": "א",
- "آ": "אא", "ى": "א", "ب": "ב", "ت": "ת", "ث": "ת'", "ج": "ג'",
- "ح": "ח", "خ": "ח'", "د": "ד", "ذ": "ד'", "ر": "ר", "ز": "ז",
- "س": "ס", "ش": "ש", "ص": "צ", "ض": "צ'", "ط": "ט", "ظ": "ט'",
- "ع": "ע", "غ": "ע'", "ف": "פ", "ق": "ק", "ك": "כ", "ل": "ל",
- "م": "מ", "ن": "נ", "ه": "ה", "و": "ו", "ي": "י", "ة": "ה",
- # special characters
- "،": ",", "َ": "ַ", "ُ": "ֻ", "ِ": "ִ",
-}
-
-final_letters = {
- "ن": "ן", "م": "ם", "ص": "ץ", "ض": "ץ'", "ف": "ף",
-}
-
-def to_taatik(arabic):
- taatik = []
- for index, letter in enumerate(arabic):
- if (
- (index == len(arabic) - 1 or arabic[index + 1] in {" ", ".", "،"}) and
- letter in final_letters
- ):
- taatik.append(final_letters[letter])
- elif letter not in arabic_to_hebrew:
- taatik.append(letter)
- else:
- taatik.append(arabic_to_hebrew[letter])
- return taatik
-
-
-class TaatikModel(CaninePreTrainedModel):
- # based on CaninePreTrainedModel
- # slightly modified for multilabel classification
-
- def __init__(self, config, num_labels=7):
- # Note: one label for each nikud type, plus one for the deletion flag
- super().__init__(config)
- config.num_labels = num_labels
- self.num_labels = config.num_labels
-
- self.canine = CanineModel(config)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, config.num_labels)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- self.criterion = nn.BCEWithLogitsLoss()
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- output_attentions=None,
- output_hidden_states=None,
- ):
-
- outputs = self.canine(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states
- )
-
- sequence_output = outputs[0]
-
- sequence_output = self.dropout(sequence_output)
- logits = self.classifier(sequence_output)
-
- loss = None
- if labels is not None:
- # print(logits)
- # print("-----------")
- # print(labels)
- loss = self.criterion(logits, labels)
-
- return TokenClassifierOutput(
- loss=loss,
- logits=logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-# tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
-# model = TashkeelModel.from_pretrained("google/canine-c")
-
-tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
-# model = TaatikModel.from_pretrained("google/canine-s")
-# model = TaatikModel.from_pretrained("./checkpoint-19034/")
-model = TaatikModel.from_pretrained("guymorlan/Arabic2Taatik")
-
-
-def convert_nikkud_to_harakat(nikkud):
- labels = []
- if "SHADDA" in nikkud:
- labels.append("SHADDA")
- if "TSERE" in nikkud:
- labels.append("KASRA")
- if "HOLAM" in nikkud:
- labels.append("DAMMA")
- if "PATACH" in nikkud:
- labels.append("FATHA")
- if "SHVA" in nikkud:
- labels.append("SUKUN")
- if "KUBUTZ" in nikkud:
- labels.append("DAMMA")
- if "HIRIQ" in nikkud:
- labels.append("KASRA")
- return labels
-
-def convert_binary_to_labels(binary_labels):
- labels = []
- if binary_labels[0] == 1:
- labels.append("SHADDA")
- if binary_labels[1] == 1:
- labels.append("TSERE")
- if binary_labels[2] == 1:
- labels.append("HOLAM")
- if binary_labels[3] == 1:
- labels.append("PATACH")
- if binary_labels[4] == 1:
- labels.append("SHVA")
- if binary_labels[5] == 1:
- labels.append("KUBUTZ")
- if binary_labels[6] == 1:
- labels.append("HIRIQ")
- return labels
-
-def convert_label_names_to_chars(label):
- if label == "SHADDA":
- return "ّ"
- if label == "TSERE":
- return "ֵ"
- if label == "HOLAM":
- return "ֹ"
- if label == "PATACH":
- return "ַ"
- if label == "SHVA":
- return "ְ"
- if label == "KUBUTZ":
- return "ֻ"
- if label == "HIRIQ":
- return "ִ"
-
- # for these, return arabic harakat
- if label == "DAMMA":
- return "ُ"
- if label == "KASRA":
- return "ِ"
- if label == "FATHA":
- return "َ"
- if label == "SUKUN":
- return "ْ"
- return ""
-
-def predict(input, prefix = "P "):
- print(input)
- input_tok = tokenizer(prefix+input, return_tensors="pt")
- print(input_tok)
- outputs = model(**input_tok)
- print(outputs)
- labels = outputs.logits.sigmoid().round().int()
- labels = labels.tolist()[0][3:-1]
- print(labels)
- labels_hebrew = [convert_binary_to_labels(x) for x in labels]
- labels_arabic = [convert_nikkud_to_harakat(x) for x in labels_hebrew]
- print(f"labels_hebrew: {labels_hebrew}")
- print(f"labels_arabic: {labels_arabic}")
-
- hebrew = [[x] for x in to_taatik(input)]
- print(hebrew)
- arabic = [[x] for x in input]
- print(arabic)
-
- print(f"len hebrew: {len(hebrew)}")
- print(f"len arabic: {len(arabic)}")
- print(f"len labels_hebrew: {len(labels_hebrew)}")
- print(f"len labels_arabic: {len(labels_arabic)}")
- print(f"labels: {labels}")
- print(f"labels_hebrew: {labels_hebrew}")
- print(f"labels_arabic: {labels_arabic}")
-
- for i in range(len(hebrew)):
- hebrew[i].extend([convert_label_names_to_chars(x) for x in labels_hebrew[i]])
- arabic[i].extend([convert_label_names_to_chars(x) for x in labels_arabic[i]])
-
-
- hebrew = ["".join(x) for x in hebrew]
- arabic = ["".join(x) for x in arabic]
-
- # loop over hebrew, if there is a ' in the second position move it to last position
- for i in range(len(hebrew)):
- if len(hebrew[i]) > 1 and hebrew[i][1] == "'":
- hebrew[i] = hebrew[i][0] + hebrew[i][2:] + hebrew[i][1]
-
- hebrew = "".join(hebrew)
- arabic = "".join(arabic)
-
-
-    # The HTML markup that originally wrapped this return value did not survive
-    # extraction; as a minimal reconstruction, style both lines and feed gr.HTML.
-    font = "Arial Unicode MS, Tahoma, sans-serif"
-    return f"<div style='font-family: {font}'>{hebrew}<br>{arabic}</div>"
-
-font_url = " "
-
-with gr.Blocks(theme=gr.themes.Soft(), title="Ammiya Diacritizer") as demo:
- gr.HTML("Colloquial Arabic Diacritizer and Hebrew Transliterator" + font_url)
- with gr.Row():
- with gr.Column():
- input = gr.Textbox(label="Input", placeholder="Enter Arabic text", lines=1)
- gr.Examples(["بديش اروح معك"], input)
- btn = gr.Button(label="Analyze")
- with gr.Column():
- with gr.Box():
- html = gr.HTML()
- btn.click(predict, inputs=[input], outputs=[html])
- input.submit(predict, inputs = [input], outputs=[html])
-
- demo.load()
- demo.launch()
-
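
The decoding step inside `predict` above is the easiest part to misread: the CANINE head emits seven independent logits per character, so the code applies sigmoid-and-round rather than argmax, and a single character can carry several diacritics at once. A toy sketch of just that step, using random logits instead of the model:

```python
import torch

NIKUD = ["SHADDA", "TSERE", "HOLAM", "PATACH", "SHVA", "KUBUTZ", "HIRIQ"]

logits = torch.randn(1, 10, len(NIKUD))        # (batch, characters, labels) — toy values
binary = logits.sigmoid().round().int()[0]     # independent 0/1 decision per label

for char_idx, row in enumerate(binary.tolist()):
    active = [name for name, flag in zip(NIKUD, row) if flag]
    print(char_idx, active)                    # e.g. 3 ['SHADDA', 'HIRIQ']
```
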
diff --git a/spaces/gwang-kim/DATID-3D/eg3d/viz/pose_widget.py b/spaces/gwang-kim/DATID-3D/eg3d/viz/pose_widget.py
deleted file mode 100644
index bcb1f1715e1021adf928df2b931a1f23d336275f..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/eg3d/viz/pose_widget.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
-#
-# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
-# property and proprietary rights in and to this material, related
-# documentation and any modifications thereto. Any use, reproduction,
-# disclosure or distribution of this material and related documentation
-# without an express license agreement from NVIDIA CORPORATION or
-# its affiliates is strictly prohibited.
-
-import numpy as np
-import imgui
-import dnnlib
-from gui_utils import imgui_utils
-
-#----------------------------------------------------------------------------
-
-class PoseWidget:
- def __init__(self, viz):
- self.viz = viz
- self.pose = dnnlib.EasyDict(yaw=0, pitch=0, anim=False, speed=0.25)
- self.pose_def = dnnlib.EasyDict(self.pose)
-
- self.lookat_point_choice = 0
- self.lookat_point_option = ['auto', 'ffhq', 'shapenet', 'afhq', 'manual']
- self.lookat_point_labels = ['Auto Detect', 'FFHQ Default', 'Shapenet Default', 'AFHQ Default', 'Manual']
- self.lookat_point = (0.0, 0.0, 0.2)
-
- def drag(self, dx, dy):
- viz = self.viz
- self.pose.yaw += -dx / viz.font_size * 3e-2
- self.pose.pitch += -dy / viz.font_size * 3e-2
-
- @imgui_utils.scoped_by_object_id
- def __call__(self, show=True):
- viz = self.viz
- if show:
- imgui.text('Pose')
- imgui.same_line(viz.label_w)
- yaw = self.pose.yaw
- pitch = self.pose.pitch
- with imgui_utils.item_width(viz.font_size * 5):
- changed, (new_yaw, new_pitch) = imgui.input_float2('##pose', yaw, pitch, format='%+.2f', flags=imgui.INPUT_TEXT_ENTER_RETURNS_TRUE)
- if changed:
- self.pose.yaw = new_yaw
- self.pose.pitch = new_pitch
- imgui.same_line(viz.label_w + viz.font_size * 13 + viz.spacing * 2)
- _clicked, dragging, dx, dy = imgui_utils.drag_button('Drag', width=viz.button_w)
- if dragging:
- self.drag(dx, dy)
- imgui.same_line()
- snapped = dnnlib.EasyDict(self.pose, yaw=round(self.pose.yaw, 1), pitch=round(self.pose.pitch, 1))
- if imgui_utils.button('Snap', width=viz.button_w, enabled=(self.pose != snapped)):
- self.pose = snapped
- imgui.same_line()
- if imgui_utils.button('Reset', width=-1, enabled=(self.pose != self.pose_def)):
- self.pose = dnnlib.EasyDict(self.pose_def)
-
- # New line starts here
- imgui.text('LookAt Point')
- imgui.same_line(viz.label_w)
- with imgui_utils.item_width(viz.font_size * 8):
- _clicked, self.lookat_point_choice = imgui.combo('', self.lookat_point_choice, self.lookat_point_labels)
- lookat_point = self.lookat_point_option[self.lookat_point_choice]
- if lookat_point == 'auto':
- self.lookat_point = None
- if lookat_point == 'ffhq':
- self.lookat_point = (0.0, 0.0, 0.2)
- changes_enabled=False
- if lookat_point == 'shapenet':
- self.lookat_point = (0.0, 0.0, 0.0)
- changes_enabled=False
- if lookat_point == 'afhq':
- self.lookat_point = (0.0, 0.0, 0.0)
- changes_enabled=False
- if lookat_point == 'manual':
- if self.lookat_point is None:
- self.lookat_point = (0.0, 0.0, 0.0)
- changes_enabled=True
- if lookat_point != 'auto':
- imgui.same_line(viz.label_w + viz.font_size * 13 + viz.spacing * 2)
- with imgui_utils.item_width(viz.font_size * 16):
- with imgui_utils.grayed_out(not changes_enabled):
- _changed, self.lookat_point = imgui.input_float3('##lookat', *self.lookat_point, format='%.2f', flags=(imgui.INPUT_TEXT_READ_ONLY if not changes_enabled else 0))
-
-
- viz.args.yaw = self.pose.yaw
- viz.args.pitch = self.pose.pitch
-
- viz.args.lookat_point = self.lookat_point
-
-#----------------------------------------------------------------------------
diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/criteria/id_loss.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/criteria/id_loss.py
deleted file mode 100644
index a828023e115243e48918538d31b91d662cd12d0f..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/criteria/id_loss.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import torch
-from torch import nn
-
-from models.facial_recognition.model_irse import Backbone
-
-
-class IDLoss(nn.Module):
- def __init__(self, opts):
- super(IDLoss, self).__init__()
- print('Loading ResNet ArcFace')
- self.facenet = Backbone(input_size=112, num_layers=50, drop_ratio=0.6, mode='ir_se')
- self.facenet.load_state_dict(torch.load(opts.ir_se50_weights))
- self.pool = torch.nn.AdaptiveAvgPool2d((256, 256))
- self.face_pool = torch.nn.AdaptiveAvgPool2d((112, 112))
- self.facenet.eval()
- self.opts = opts
-
- def extract_feats(self, x):
- if x.shape[2] != 256:
- x = self.pool(x)
- x = x[:, :, 35:223, 32:220] # Crop interesting region
- x = self.face_pool(x)
- x_feats = self.facenet(x)
- return x_feats
-
- def forward(self, y_hat, y):
- n_samples = y.shape[0]
- y_feats = self.extract_feats(y) # Otherwise use the feature from there
- y_hat_feats = self.extract_feats(y_hat)
- y_feats = y_feats.detach()
- loss = 0
- sim_improvement = 0
- count = 0
- for i in range(n_samples):
- diff_target = y_hat_feats[i].dot(y_feats[i])
- loss += 1 - diff_target
- count += 1
-
- return loss / count, sim_improvement / count
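
The loss deleted above compares face-recognition embeddings of the generated and source images and penalises one minus their similarity. A stand-alone sketch with random embeddings in place of the ArcFace backbone; the explicit normalisation below stands in for the backbone's own l2-normalised output:

```python
import torch
import torch.nn.functional as F

def id_loss(feats_hat: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
    feats = feats.detach()                                   # target embeddings are not optimised
    sims = (F.normalize(feats_hat, dim=1) * F.normalize(feats, dim=1)).sum(dim=1)
    return (1.0 - sims).mean()

y_hat_feats = torch.randn(4, 512, requires_grad=True)        # embeddings of generated images
y_feats = torch.randn(4, 512)                                 # embeddings of source images
loss = id_loss(y_hat_feats, y_feats)
loss.backward()
```
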
diff --git a/spaces/h2oai/h2ogpt-chatbot2/src/gradio_utils/__init__.py b/spaces/h2oai/h2ogpt-chatbot2/src/gradio_utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/hack46/46jobs/__init__.py b/spaces/hack46/46jobs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/haonanzhang/ChatGPT-BOT/run_Linux.sh b/spaces/haonanzhang/ChatGPT-BOT/run_Linux.sh
deleted file mode 100644
index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000
--- a/spaces/haonanzhang/ChatGPT-BOT/run_Linux.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-# Get the directory containing this script
-script_dir=$(dirname "$0")
-
-# Change the working directory to the script's directory
-cd "$script_dir"
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
-    # If there are updates, stop the currently running server
- pkill -f ChuanhuChatbot.py
-
-    # Pull the latest changes
- git pull
-
-    # Install dependencies
- pip3 install -r requirements.txt
-
-    # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/__init__.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/__init__.py
deleted file mode 100644
index 156eed9099de590919c6cc48b71c3e7efe9628cd..0000000000000000000000000000000000000000
--- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/__init__.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from maskrcnn_benchmark.data import datasets
-
-from .coco import coco_evaluation
-from .voc import voc_evaluation
-from .vg import vg_evaluation
-from .box_aug import im_detect_bbox_aug
-from .od_to_grounding import od_to_grounding_evaluation
-
-
-def evaluate(dataset, predictions, output_folder, **kwargs):
- """evaluate dataset using different methods based on dataset type.
- Args:
- dataset: Dataset object
- predictions(list[BoxList]): each item in the list represents the
- prediction results for one image.
- output_folder: output folder, to save evaluation files or results.
- **kwargs: other args.
- Returns:
- evaluation result
- """
- args = dict(
- dataset=dataset, predictions=predictions, output_folder=output_folder, **kwargs
- )
- if isinstance(dataset, datasets.COCODataset) or isinstance(dataset, datasets.TSVDataset):
- return coco_evaluation(**args)
- # elif isinstance(dataset, datasets.VGTSVDataset):
- # return vg_evaluation(**args)
- elif isinstance(dataset, datasets.PascalVOCDataset):
- return voc_evaluation(**args)
- elif isinstance(dataset, datasets.CocoDetectionTSV):
- return od_to_grounding_evaluation(**args)
- elif isinstance(dataset, datasets.LvisDetection):
- pass
- else:
- dataset_name = dataset.__class__.__name__
- raise NotImplementedError("Unsupported dataset type {}.".format(dataset_name))
-
-
-def evaluate_mdetr(dataset, predictions, output_folder, **kwargs):
-    # Mirrors `evaluate` above; extra keyword arguments are forwarded to the
-    # dataset-specific evaluation function.
-    args = dict(
-        dataset=dataset, predictions=predictions, output_folder=output_folder, **kwargs
-    )
- if isinstance(dataset, datasets.COCODataset) or isinstance(dataset, datasets.TSVDataset):
- return coco_evaluation(**args)
- # elif isinstance(dataset, datasets.VGTSVDataset):
- # return vg_evaluation(**args)
- elif isinstance(dataset, datasets.PascalVOCDataset):
- return voc_evaluation(**args)
- elif isinstance(dataset, datasets.CocoDetectionTSV):
- return od_to_grounding_evaluation(**args)
- elif isinstance(dataset, datasets.LvisDetection):
- pass
- else:
- dataset_name = dataset.__class__.__name__
- raise NotImplementedError("Unsupported dataset type {}.".format(dataset_name))
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/vis/densepose.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/vis/densepose.py
deleted file mode 100644
index f2e77dc2d8e0f8c041ac1217978c639a826f0857..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/vis/densepose.py
+++ /dev/null
@@ -1,593 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import logging
-import numpy as np
-from typing import Iterable, Optional, Tuple
-import cv2
-
-from ..data.structures import DensePoseDataRelative, DensePoseOutput, DensePoseResult
-from .base import Boxes, Image, MatrixVisualizer, PointsVisualizer
-
-
-class DensePoseResultsVisualizer(object):
- def visualize(self, image_bgr: Image, densepose_result: Optional[DensePoseResult]) -> Image:
- if densepose_result is None:
- return image_bgr
- context = self.create_visualization_context(image_bgr)
- for i, result_encoded_w_shape in enumerate(densepose_result.results):
- iuv_arr = DensePoseResult.decode_png_data(*result_encoded_w_shape)
- bbox_xywh = densepose_result.boxes_xywh[i]
- self.visualize_iuv_arr(context, iuv_arr, bbox_xywh)
- image_bgr = self.context_to_image_bgr(context)
- return image_bgr
-
-
-class DensePoseMaskedColormapResultsVisualizer(DensePoseResultsVisualizer):
- def __init__(
- self,
- data_extractor,
- segm_extractor,
- inplace=True,
- cmap=cv2.COLORMAP_PARULA,
- alpha=0.7,
- val_scale=1.0,
- ):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace, cmap=cmap, val_scale=val_scale, alpha=alpha
- )
- self.data_extractor = data_extractor
- self.segm_extractor = segm_extractor
-
- def create_visualization_context(self, image_bgr: Image):
- return image_bgr
-
- def context_to_image_bgr(self, context):
- return context
-
- def get_image_bgr_from_context(self, context):
- return context
-
- def visualize_iuv_arr(self, context, iuv_arr, bbox_xywh):
- image_bgr = self.get_image_bgr_from_context(context)
- matrix = self.data_extractor(iuv_arr)
- segm = self.segm_extractor(iuv_arr)
- mask = np.zeros(matrix.shape, dtype=np.uint8)
- mask[segm > 0] = 1
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, matrix, bbox_xywh)
- return image_bgr
-
-
-def _extract_i_from_iuvarr(iuv_arr):
- return iuv_arr[0, :, :]
-
-
-def _extract_u_from_iuvarr(iuv_arr):
- return iuv_arr[1, :, :]
-
-
-def _extract_v_from_iuvarr(iuv_arr):
- return iuv_arr[2, :, :]
-
-
-class DensePoseResultsMplContourVisualizer(DensePoseResultsVisualizer):
- def __init__(self, levels=10, **kwargs):
- self.levels = levels
- self.plot_args = kwargs
-
- def create_visualization_context(self, image_bgr: Image):
- import matplotlib.pyplot as plt
- from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
-
- context = {}
- context["image_bgr"] = image_bgr
- dpi = 100
- height_inches = float(image_bgr.shape[0]) / dpi
- width_inches = float(image_bgr.shape[1]) / dpi
- fig = plt.figure(figsize=(width_inches, height_inches), dpi=dpi)
- plt.axes([0, 0, 1, 1])
- plt.axis("off")
- context["fig"] = fig
- canvas = FigureCanvas(fig)
- context["canvas"] = canvas
- extent = (0, image_bgr.shape[1], image_bgr.shape[0], 0)
- plt.imshow(image_bgr[:, :, ::-1], extent=extent)
- return context
-
- def context_to_image_bgr(self, context):
- fig = context["fig"]
- w, h = map(int, fig.get_size_inches() * fig.get_dpi())
- canvas = context["canvas"]
- canvas.draw()
- image_1d = np.fromstring(canvas.tostring_rgb(), dtype="uint8")
- image_rgb = image_1d.reshape(h, w, 3)
- image_bgr = image_rgb[:, :, ::-1].copy()
- return image_bgr
-
- def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh: Boxes) -> Image:
- import matplotlib.pyplot as plt
-
- u = _extract_u_from_iuvarr(iuv_arr).astype(float) / 255.0
- v = _extract_v_from_iuvarr(iuv_arr).astype(float) / 255.0
- extent = (
- bbox_xywh[0],
- bbox_xywh[0] + bbox_xywh[2],
- bbox_xywh[1],
- bbox_xywh[1] + bbox_xywh[3],
- )
- plt.contour(u, self.levels, extent=extent, **self.plot_args)
- plt.contour(v, self.levels, extent=extent, **self.plot_args)
-
-
-class DensePoseResultsCustomContourVisualizer(DensePoseResultsVisualizer):
- """
- Contour visualization using marching squares
- """
-
- def __init__(self, levels=10, **kwargs):
- # TODO: colormap is hardcoded
- cmap = cv2.COLORMAP_PARULA
- if isinstance(levels, int):
- self.levels = np.linspace(0, 1, levels)
- else:
- self.levels = levels
- if "linewidths" in kwargs:
- self.linewidths = kwargs["linewidths"]
- else:
- self.linewidths = [1] * len(self.levels)
- self.plot_args = kwargs
- img_colors_bgr = cv2.applyColorMap((self.levels * 255).astype(np.uint8), cmap)
- self.level_colors_bgr = [
- [int(v) for v in img_color_bgr.ravel()] for img_color_bgr in img_colors_bgr
- ]
-
- def create_visualization_context(self, image_bgr: Image):
- return image_bgr
-
- def context_to_image_bgr(self, context):
- return context
-
- def get_image_bgr_from_context(self, context):
- return context
-
- def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh: Boxes) -> Image:
- image_bgr = self.get_image_bgr_from_context(context)
- segm = _extract_i_from_iuvarr(iuv_arr)
- u = _extract_u_from_iuvarr(iuv_arr).astype(float) / 255.0
- v = _extract_v_from_iuvarr(iuv_arr).astype(float) / 255.0
- self._contours(image_bgr, u, segm, bbox_xywh)
- self._contours(image_bgr, v, segm, bbox_xywh)
-
- def _contours(self, image_bgr, arr, segm, bbox_xywh):
- for part_idx in range(1, DensePoseDataRelative.N_PART_LABELS + 1):
- mask = segm == part_idx
- if not np.any(mask):
- continue
- arr_min = np.amin(arr[mask])
- arr_max = np.amax(arr[mask])
- I, J = np.nonzero(mask)
- i0 = np.amin(I)
- i1 = np.amax(I) + 1
- j0 = np.amin(J)
- j1 = np.amax(J) + 1
- if (j1 == j0 + 1) or (i1 == i0 + 1):
- continue
- Nw = arr.shape[1] - 1
- Nh = arr.shape[0] - 1
- for level_idx, level in enumerate(self.levels):
- if (level < arr_min) or (level > arr_max):
- continue
- vp = arr[i0:i1, j0:j1] >= level
- bin_codes = vp[:-1, :-1] + vp[1:, :-1] * 2 + vp[1:, 1:] * 4 + vp[:-1, 1:] * 8
- mp = mask[i0:i1, j0:j1]
- bin_mask_codes = mp[:-1, :-1] + mp[1:, :-1] * 2 + mp[1:, 1:] * 4 + mp[:-1, 1:] * 8
- it = np.nditer(bin_codes, flags=["multi_index"])
- color_bgr = self.level_colors_bgr[level_idx]
- linewidth = self.linewidths[level_idx]
- while not it.finished:
- if (it[0] != 0) and (it[0] != 15):
- i, j = it.multi_index
- if bin_mask_codes[i, j] != 0:
- self._draw_line(
- image_bgr,
- arr,
- mask,
- level,
- color_bgr,
- linewidth,
- it[0],
- it.multi_index,
- bbox_xywh,
- Nw,
- Nh,
- (i0, j0),
- )
- it.iternext()
-
- def _draw_line(
- self,
- image_bgr,
- arr,
- mask,
- v,
- color_bgr,
- linewidth,
- bin_code,
- multi_idx,
- bbox_xywh,
- Nw,
- Nh,
- offset,
- ):
- lines = self._bin_code_2_lines(arr, v, bin_code, multi_idx, Nw, Nh, offset)
- x0, y0, w, h = bbox_xywh
- x1 = x0 + w
- y1 = y0 + h
- for line in lines:
- x0r, y0r = line[0]
- x1r, y1r = line[1]
- pt0 = (int(x0 + x0r * (x1 - x0)), int(y0 + y0r * (y1 - y0)))
- pt1 = (int(x0 + x1r * (x1 - x0)), int(y0 + y1r * (y1 - y0)))
- cv2.line(image_bgr, pt0, pt1, color_bgr, linewidth)
-
- def _bin_code_2_lines(self, arr, v, bin_code, multi_idx, Nw, Nh, offset):
- i0, j0 = offset
- i, j = multi_idx
- i += i0
- j += j0
- v0, v1, v2, v3 = arr[i, j], arr[i + 1, j], arr[i + 1, j + 1], arr[i, j + 1]
- x0i = float(j) / Nw
- y0j = float(i) / Nh
- He = 1.0 / Nh
- We = 1.0 / Nw
- if (bin_code == 1) or (bin_code == 14):
- a = (v - v0) / (v1 - v0)
- b = (v - v0) / (v3 - v0)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + b * We, y0j)
- return [(pt1, pt2)]
- elif (bin_code == 2) or (bin_code == 13):
- a = (v - v0) / (v1 - v0)
- b = (v - v1) / (v2 - v1)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + b * We, y0j + He)
- return [(pt1, pt2)]
- elif (bin_code == 3) or (bin_code == 12):
- a = (v - v0) / (v3 - v0)
- b = (v - v1) / (v2 - v1)
- pt1 = (x0i + a * We, y0j)
- pt2 = (x0i + b * We, y0j + He)
- return [(pt1, pt2)]
- elif (bin_code == 4) or (bin_code == 11):
- a = (v - v1) / (v2 - v1)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i + a * We, y0j + He)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif (bin_code == 6) or (bin_code == 9):
- a = (v - v0) / (v1 - v0)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i, y0j + a * He)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif (bin_code == 7) or (bin_code == 8):
- a = (v - v0) / (v3 - v0)
- b = (v - v3) / (v2 - v3)
- pt1 = (x0i + a * We, y0j)
- pt2 = (x0i + We, y0j + b * He)
- return [(pt1, pt2)]
- elif bin_code == 5:
- a1 = (v - v0) / (v1 - v0)
- b1 = (v - v1) / (v2 - v1)
- pt11 = (x0i, y0j + a1 * He)
- pt12 = (x0i + b1 * We, y0j + He)
- a2 = (v - v0) / (v3 - v0)
- b2 = (v - v3) / (v2 - v3)
- pt21 = (x0i + a2 * We, y0j)
- pt22 = (x0i + We, y0j + b2 * He)
- return [(pt11, pt12), (pt21, pt22)]
- elif bin_code == 10:
- a1 = (v - v0) / (v3 - v0)
- b1 = (v - v0) / (v1 - v0)
- pt11 = (x0i + a1 * We, y0j)
- pt12 = (x0i, y0j + b1 * He)
- a2 = (v - v1) / (v2 - v1)
- b2 = (v - v3) / (v2 - v3)
- pt21 = (x0i + a2 * We, y0j + He)
- pt22 = (x0i + We, y0j + b2 * He)
- return [(pt11, pt12), (pt21, pt22)]
- return []
-
-
-try:
- import matplotlib
-
- matplotlib.use("Agg")
- DensePoseResultsContourVisualizer = DensePoseResultsMplContourVisualizer
-except ModuleNotFoundError:
- logger = logging.getLogger(__name__)
- logger.warning("Could not import matplotlib, using custom contour visualizer")
- DensePoseResultsContourVisualizer = DensePoseResultsCustomContourVisualizer
-
-
-class DensePoseResultsFineSegmentationVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- super(DensePoseResultsFineSegmentationVisualizer, self).__init__(
- _extract_i_from_iuvarr,
- _extract_i_from_iuvarr,
- inplace,
- cmap,
- alpha,
- val_scale=255.0 / DensePoseDataRelative.N_PART_LABELS,
- )
-
-
-class DensePoseResultsUVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- super(DensePoseResultsUVisualizer, self).__init__(
- _extract_u_from_iuvarr, _extract_i_from_iuvarr, inplace, cmap, alpha, val_scale=1.0
- )
-
-
-class DensePoseResultsVVisualizer(DensePoseMaskedColormapResultsVisualizer):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- super(DensePoseResultsVVisualizer, self).__init__(
- _extract_v_from_iuvarr, _extract_i_from_iuvarr, inplace, cmap, alpha, val_scale=1.0
- )
-
-
-class DensePoseOutputsFineSegmentationVisualizer(object):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace,
- cmap=cmap,
- val_scale=255.0 / DensePoseDataRelative.N_PART_LABELS,
- alpha=alpha,
- )
-
- def visualize(
- self, image_bgr: Image, dp_output_with_bboxes: Optional[Tuple[DensePoseOutput, Boxes]]
- ) -> Image:
- if dp_output_with_bboxes is None:
- return image_bgr
- densepose_output, bboxes_xywh = dp_output_with_bboxes
- S = densepose_output.S
- I = densepose_output.I # noqa
- U = densepose_output.U
- V = densepose_output.V
- N = S.size(0)
- assert N == I.size(
- 0
- ), "densepose outputs S {} and I {}" " should have equal first dim size".format(
- S.size(), I.size()
- )
- assert N == U.size(
- 0
- ), "densepose outputs S {} and U {}" " should have equal first dim size".format(
- S.size(), U.size()
- )
- assert N == V.size(
- 0
- ), "densepose outputs S {} and V {}" " should have equal first dim size".format(
- S.size(), V.size()
- )
- assert N == len(
- bboxes_xywh
- ), "number of bounding boxes {}" " should be equal to first dim size of outputs {}".format(
- len(bboxes_xywh), N
- )
- for n in range(N):
- Sn = S[n].argmax(dim=0)
- In = I[n].argmax(dim=0) * (Sn > 0).long()
- matrix = In.cpu().numpy().astype(np.uint8)
- mask = np.zeros(matrix.shape, dtype=np.uint8)
- mask[matrix > 0] = 1
- bbox_xywh = bboxes_xywh[n]
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, matrix, bbox_xywh)
- return image_bgr
-
-
-class DensePoseOutputsUVisualizer(object):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace, cmap=cmap, val_scale=1.0, alpha=alpha
- )
-
- def visualize(
- self, image_bgr: Image, dp_output_with_bboxes: Optional[Tuple[DensePoseOutput, Boxes]]
- ) -> Image:
- if dp_output_with_bboxes is None:
- return image_bgr
- densepose_output, bboxes_xywh = dp_output_with_bboxes
- assert isinstance(
- densepose_output, DensePoseOutput
- ), "DensePoseOutput expected, {} encountered".format(type(densepose_output))
- S = densepose_output.S
- I = densepose_output.I # noqa
- U = densepose_output.U
- V = densepose_output.V
- N = S.size(0)
- assert N == I.size(
- 0
- ), "densepose outputs S {} and I {}" " should have equal first dim size".format(
- S.size(), I.size()
- )
- assert N == U.size(
- 0
- ), "densepose outputs S {} and U {}" " should have equal first dim size".format(
- S.size(), U.size()
- )
- assert N == V.size(
- 0
- ), "densepose outputs S {} and V {}" " should have equal first dim size".format(
- S.size(), V.size()
- )
- assert N == len(
- bboxes_xywh
- ), "number of bounding boxes {}" " should be equal to first dim size of outputs {}".format(
- len(bboxes_xywh), N
- )
- for n in range(N):
- Sn = S[n].argmax(dim=0)
- In = I[n].argmax(dim=0) * (Sn > 0).long()
- segmentation = In.cpu().numpy().astype(np.uint8)
- mask = np.zeros(segmentation.shape, dtype=np.uint8)
- mask[segmentation > 0] = 1
- Un = U[n].cpu().numpy().astype(np.float32)
- Uvis = np.zeros(segmentation.shape, dtype=np.float32)
- for partId in range(Un.shape[0]):
- Uvis[segmentation == partId] = Un[partId][segmentation == partId].clip(0, 1) * 255
- bbox_xywh = bboxes_xywh[n]
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, Uvis, bbox_xywh)
- return image_bgr
-
-
-class DensePoseOutputsVVisualizer(object):
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace, cmap=cmap, val_scale=1.0, alpha=alpha
- )
-
- def visualize(
- self, image_bgr: Image, dp_output_with_bboxes: Optional[Tuple[DensePoseOutput, Boxes]]
- ) -> Image:
- if dp_output_with_bboxes is None:
- return image_bgr
- densepose_output, bboxes_xywh = dp_output_with_bboxes
- assert isinstance(
- densepose_output, DensePoseOutput
- ), "DensePoseOutput expected, {} encountered".format(type(densepose_output))
- S = densepose_output.S
- I = densepose_output.I # noqa
- U = densepose_output.U
- V = densepose_output.V
- N = S.size(0)
- assert N == I.size(
- 0
- ), "densepose outputs S {} and I {}" " should have equal first dim size".format(
- S.size(), I.size()
- )
- assert N == U.size(
- 0
- ), "densepose outputs S {} and U {}" " should have equal first dim size".format(
- S.size(), U.size()
- )
- assert N == V.size(
- 0
- ), "densepose outputs S {} and V {}" " should have equal first dim size".format(
- S.size(), V.size()
- )
- assert N == len(
- bboxes_xywh
- ), "number of bounding boxes {}" " should be equal to first dim size of outputs {}".format(
- len(bboxes_xywh), N
- )
- for n in range(N):
- Sn = S[n].argmax(dim=0)
- In = I[n].argmax(dim=0) * (Sn > 0).long()
- segmentation = In.cpu().numpy().astype(np.uint8)
- mask = np.zeros(segmentation.shape, dtype=np.uint8)
- mask[segmentation > 0] = 1
- Vn = V[n].cpu().numpy().astype(np.float32)
- Vvis = np.zeros(segmentation.shape, dtype=np.float32)
-            for partId in range(Vn.shape[0]):  # Vn is a NumPy array, so use .shape
- Vvis[segmentation == partId] = Vn[partId][segmentation == partId].clip(0, 1) * 255
- bbox_xywh = bboxes_xywh[n]
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, Vvis, bbox_xywh)
- return image_bgr
-
-
-class DensePoseDataCoarseSegmentationVisualizer(object):
- """
- Visualizer for ground truth segmentation
- """
-
- def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7):
- self.mask_visualizer = MatrixVisualizer(
- inplace=inplace,
- cmap=cmap,
- val_scale=255.0 / DensePoseDataRelative.N_BODY_PARTS,
- alpha=alpha,
- )
-
- def visualize(
- self,
- image_bgr: Image,
- bbox_densepose_datas: Optional[Tuple[Iterable[Boxes], Iterable[DensePoseDataRelative]]],
- ) -> Image:
- if bbox_densepose_datas is None:
- return image_bgr
- for bbox_xywh, densepose_data in zip(*bbox_densepose_datas):
- matrix = densepose_data.segm.numpy()
- mask = np.zeros(matrix.shape, dtype=np.uint8)
- mask[matrix > 0] = 1
- image_bgr = self.mask_visualizer.visualize(image_bgr, mask, matrix, bbox_xywh.numpy())
- return image_bgr
-
-
-class DensePoseDataPointsVisualizer(object):
- def __init__(self, densepose_data_to_value_fn=None, cmap=cv2.COLORMAP_PARULA):
- self.points_visualizer = PointsVisualizer()
- self.densepose_data_to_value_fn = densepose_data_to_value_fn
- self.cmap = cmap
-
- def visualize(
- self,
- image_bgr: Image,
- bbox_densepose_datas: Optional[Tuple[Iterable[Boxes], Iterable[DensePoseDataRelative]]],
- ) -> Image:
- if bbox_densepose_datas is None:
- return image_bgr
- for bbox_xywh, densepose_data in zip(*bbox_densepose_datas):
- x0, y0, w, h = bbox_xywh.numpy()
- x = densepose_data.x.numpy() * w / 255.0 + x0
- y = densepose_data.y.numpy() * h / 255.0 + y0
- pts_xy = zip(x, y)
- if self.densepose_data_to_value_fn is None:
- image_bgr = self.points_visualizer.visualize(image_bgr, pts_xy)
- else:
- v = self.densepose_data_to_value_fn(densepose_data)
- img_colors_bgr = cv2.applyColorMap(v, self.cmap)
- colors_bgr = [
- [int(v) for v in img_color_bgr.ravel()] for img_color_bgr in img_colors_bgr
- ]
- image_bgr = self.points_visualizer.visualize(image_bgr, pts_xy, colors_bgr)
- return image_bgr
-
-
-def _densepose_data_u_for_cmap(densepose_data):
- u = np.clip(densepose_data.u.numpy(), 0, 1) * 255.0
- return u.astype(np.uint8)
-
-
-def _densepose_data_v_for_cmap(densepose_data):
- v = np.clip(densepose_data.v.numpy(), 0, 1) * 255.0
- return v.astype(np.uint8)
-
-
-def _densepose_data_i_for_cmap(densepose_data):
- i = (
- np.clip(densepose_data.i.numpy(), 0.0, DensePoseDataRelative.N_PART_LABELS)
- * 255.0
- / DensePoseDataRelative.N_PART_LABELS
- )
- return i.astype(np.uint8)
-
-
-class DensePoseDataPointsUVisualizer(DensePoseDataPointsVisualizer):
- def __init__(self):
- super(DensePoseDataPointsUVisualizer, self).__init__(
- densepose_data_to_value_fn=_densepose_data_u_for_cmap
- )
-
-
-class DensePoseDataPointsVVisualizer(DensePoseDataPointsVisualizer):
- def __init__(self):
- super(DensePoseDataPointsVVisualizer, self).__init__(
- densepose_data_to_value_fn=_densepose_data_v_for_cmap
- )
-
-
-class DensePoseDataPointsIVisualizer(DensePoseDataPointsVisualizer):
- def __init__(self):
- super(DensePoseDataPointsIVisualizer, self).__init__(
- densepose_data_to_value_fn=_densepose_data_i_for_cmap
- )
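
The custom contour visualiser above walks a marching-squares state machine by hand so that matplotlib is not needed at draw time. For readers who only want iso-contours of a UV chart inside a part mask, an equivalent-in-spirit sketch using scikit-image — a different library than the code above, purely illustrative:

```python
import numpy as np
from skimage import measure

u = np.random.rand(64, 64).astype(np.float32)   # toy U chart, values in [0, 1]
mask = np.zeros_like(u, dtype=bool)
mask[16:48, 16:48] = True                        # toy body-part mask
u_masked = np.where(mask, u, -1.0)               # push outside-mask pixels below every level

for level in np.linspace(0.1, 0.9, 9):
    for contour in measure.find_contours(u_masked, level):
        # `contour` is an (N, 2) array of (row, col) points; map it into the
        # detection's bounding box and draw it with cv2.polylines or similar.
        pass
```
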
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/data/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/data/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/hdhzk/bingo/src/pages/api/blob.ts b/spaces/hdhzk/bingo/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/hdhzk/bingo/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
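
The Next.js route deleted above is a thin proxy: it forwards the `bcid` query parameter to Bing, copies the content headers, and relays the body. A rough Python/FastAPI equivalent — a sketch in a different stack, not part of the project, and it buffers the body rather than streaming it:

```python
import httpx
from fastapi import FastAPI, Response

app = FastAPI()
API_DOMAIN = "https://www.bing.com"

@app.get("/api/blob")
async def blob(bcid: str) -> Response:
    async with httpx.AsyncClient() as client:
        upstream = await client.get(f"{API_DOMAIN}/images/blob", params={"bcid": bcid})
    return Response(
        content=upstream.content,
        media_type=upstream.headers.get("content-type", "application/octet-stream"),
    )
```
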
diff --git a/spaces/hgrif/rhyme-with-ai/rhyme_with_ai/rhyme_generator.py b/spaces/hgrif/rhyme-with-ai/rhyme_with_ai/rhyme_generator.py
deleted file mode 100644
index c2dea1605a9f3307cf556fe6e70605ca02b08988..0000000000000000000000000000000000000000
--- a/spaces/hgrif/rhyme-with-ai/rhyme_with_ai/rhyme_generator.py
+++ /dev/null
@@ -1,183 +0,0 @@
-import logging
-from typing import List
-
-import numpy as np
-import tensorflow as tf
-from transformers import BertTokenizer, TFAutoModelForMaskedLM
-
-from rhyme_with_ai.token_weighter import TokenWeighter
-from rhyme_with_ai.utils import pairwise
-
-
-class RhymeGenerator:
- def __init__(
- self,
- model: TFAutoModelForMaskedLM,
- tokenizer: BertTokenizer,
- token_weighter: TokenWeighter = None,
- ):
- """Generate rhymes.
-
- Parameters
- ----------
- model : Model for masked language modelling
- tokenizer : Tokenizer for model
- token_weighter : Class that weighs tokens
- """
-
- self.model = model
- self.tokenizer = tokenizer
- if token_weighter is None:
- token_weighter = TokenWeighter(tokenizer)
- self.token_weighter = token_weighter
- self._logger = logging.getLogger(__name__)
-
- self.tokenized_rhymes_ = None
- self.position_probas_ = None
-
- # Easy access.
- self.comma_token_id = self.tokenizer.encode(",", add_special_tokens=False)[0]
- self.period_token_id = self.tokenizer.encode(".", add_special_tokens=False)[0]
- self.mask_token_id = self.tokenizer.mask_token_id
-
- def start(self, query: str, rhyme_words: List[str]) -> None:
- """Start the sentence generator.
-
- Parameters
- ----------
- query : Seed sentence
- rhyme_words : Rhyme words for next sentence
- """
- # TODO: What if no content?
- self._logger.info("Got sentence %s", query)
- tokenized_rhymes = [
- self._initialize_rhymes(query, rhyme_word) for rhyme_word in rhyme_words
- ]
- # Make same length.
- self.tokenized_rhymes_ = tf.keras.preprocessing.sequence.pad_sequences(
- tokenized_rhymes, padding="post", value=self.tokenizer.pad_token_id
- )
- p = self.tokenized_rhymes_ == self.tokenizer.mask_token_id
- self.position_probas_ = p / p.sum(1).reshape(-1, 1)
-
- def _initialize_rhymes(self, query: str, rhyme_word: str) -> List[int]:
- """Initialize the rhymes.
-
- * Tokenize input
- * Append a comma if the sentence does not end in it (might add better predictions as it
- shows the two sentence parts are related)
- * Make second line as long as the original
- * Add a period
-
- Parameters
- ----------
- query : First line
- rhyme_word : Last word for second line
-
- Returns
- -------
- Tokenized rhyme lines
- """
-
- query_token_ids = self.tokenizer.encode(query, add_special_tokens=False)
- rhyme_word_token_ids = self.tokenizer.encode(
- rhyme_word, add_special_tokens=False
- )
-
- if query_token_ids[-1] != self.comma_token_id:
- query_token_ids.append(self.comma_token_id)
-
- magic_correction = len(rhyme_word_token_ids) + 1 # 1 for comma
- return (
- query_token_ids
- + [self.tokenizer.mask_token_id] * (len(query_token_ids) - magic_correction)
- + rhyme_word_token_ids
- + [self.period_token_id]
- )
-
- def mutate(self):
- """Mutate the current rhymes.
-
- Returns
- -------
- Mutated rhymes
- """
- self.tokenized_rhymes_ = self._mutate(
- self.tokenized_rhymes_, self.position_probas_, self.token_weighter.proba
- )
-
- rhymes = []
- for i in range(len(self.tokenized_rhymes_)):
- rhymes.append(
- self.tokenizer.convert_tokens_to_string(
- self.tokenizer.convert_ids_to_tokens(
- self.tokenized_rhymes_[i], skip_special_tokens=True
- )
- )
- )
- return rhymes
-
- def _mutate(
- self,
- tokenized_rhymes: np.ndarray,
- position_probas: np.ndarray,
- token_id_probas: np.ndarray,
- ) -> np.ndarray:
-
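-        # One update step per rhyme: mask a single position, query the masked
-        # language model, then sample a replacement weighted by token_id_probas.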
- replacements = []
- for i in range(tokenized_rhymes.shape[0]):
- mask_idx, masked_token_ids = self._mask_token(
- tokenized_rhymes[i], position_probas[i]
- )
- tokenized_rhymes[i] = masked_token_ids
- replacements.append(mask_idx)
-
- predictions = self._predict_masked_tokens(tokenized_rhymes)
-
- for i, token_ids in enumerate(tokenized_rhymes):
- replace_ix = replacements[i]
- token_ids[replace_ix] = self._draw_replacement(
- predictions[i], token_id_probas, replace_ix
- )
- tokenized_rhymes[i] = token_ids
-
- return tokenized_rhymes
-
- def _mask_token(self, token_ids, position_probas):
- """Mask line and return index to update."""
- token_ids = self._mask_repeats(token_ids, position_probas)
- ix = self._locate_mask(token_ids, position_probas)
- token_ids[ix] = self.mask_token_id
- return ix, token_ids
-
- def _locate_mask(self, token_ids, position_probas):
- """Update masks or a random token."""
- if self.mask_token_id in token_ids:
- # Already masks present, just return the last.
-            # We used to return the first mask, but that gave worse predictions.
- return np.where(token_ids == self.tokenizer.mask_token_id)[0][-1]
- return np.random.choice(range(len(position_probas)), p=position_probas)
-
- def _mask_repeats(self, token_ids, position_probas):
- """Repeated tokens are generally of less quality."""
- repeats = [
- ii for ii, ids in enumerate(pairwise(token_ids[:-2])) if ids[0] == ids[1]
- ]
- for ii in repeats:
- if position_probas[ii] > 0:
- token_ids[ii] = self.mask_token_id
- if position_probas[ii + 1] > 0:
- token_ids[ii + 1] = self.mask_token_id
- return token_ids
-
- def _predict_masked_tokens(self, tokenized_rhymes):
- return self.model(tf.constant(tokenized_rhymes))[0]
-
- def _draw_replacement(self, predictions, token_probas, replace_ix):
- """Get probability, weigh and draw."""
- # TODO (HG): Can't we softmax when calling the model?
- probas = tf.nn.softmax(predictions[replace_ix]).numpy() * token_probas
- probas /= probas.sum()
- return np.random.choice(range(len(probas)), p=probas)
-
-
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/distributed.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/distributed.py
deleted file mode 100644
index e2dcab5680f4d472774e4715fba896e0ff05e155..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/distributed.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import torch
-from torch import distributed
-from torch import autograd
-from torch.nn.parallel import DistributedDataParallel as DDP
-
-
-def print_if_rank0(*args):
- if distributed.get_rank() == 0:
- print(*args)
-
-
-class awesome_allgather_function(autograd.Function):
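-    """Differentiable all_gather: forward concatenates the tensors from every
-    rank; backward returns only the gradient slice for the local rank's input."""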
- @staticmethod
- def forward(ctx, input):
- world_size = distributed.get_world_size()
-        # create a destination list for the allgather: one empty tensor per rank in the world.
- allgather_list = [torch.empty_like(input) for _ in range(world_size)]
- #if distributed.get_rank() == 0:
- # import IPython;IPython.embed()
- distributed.all_gather(allgather_list, input)
- return torch.cat(allgather_list, dim=0)
-
- @staticmethod
- def backward(ctx, grad_output):
- #print_if_rank0("backward grad_output len", len(grad_output))
- #print_if_rank0("backward grad_output shape", grad_output.shape)
- grads_per_rank = grad_output.shape[0] // distributed.get_world_size()
- rank = distributed.get_rank()
- # We'll receive gradients for the entire catted forward output, so to mimic DataParallel,
- # return only the slice that corresponds to this process's input:
- sl = slice(rank * grads_per_rank, (rank + 1) * grads_per_rank)
- #print("worker", rank, "backward slice", sl)
- return grad_output[sl]
-
-
-if __name__ == "__main__":
- import torch.distributed as dist
- import argparse
- from torch import nn
- from torch.optim import Adam
-
- argumentparser = argparse.ArgumentParser()
- argumentparser.add_argument("--local_rank", type=int)
- args = argumentparser.parse_args()
-
- torch.cuda.set_device(args.local_rank)
- dist.init_process_group(backend='nccl', init_method='env://')
-
- rnd = torch.rand((5, 2)).cuda()
-
- rnd_gathered = awesome_allgather_function.apply(rnd)
- print("gathering random tensors\nbefore\b", rnd, "\nafter\n", rnd_gathered)
-
- # so far this works as expected
- print("now running a DDP model")
- c = nn.Conv2d(2, 3, 3, 1, 1, 1, 1, True).cuda()
- c = DDP(c)
- opt = Adam(c.parameters())
-
- bs = 5
- if dist.get_rank() == 0:
- bs = 4
- inp = torch.rand((bs, 2, 5, 5)).cuda()
-
- out = c(inp)
- print("output_shape", out.shape)
-
- out_gathered = awesome_allgather_function.apply(out)
- print("output_shape_after_gather", out_gathered.shape)
- # this also works
-
- loss = out_gathered.sum()
- loss.backward()
- opt.step()
diff --git a/spaces/hu-po/speech2speech/app.py b/spaces/hu-po/speech2speech/app.py
deleted file mode 100644
index 874833cf07e029560e18ee60a8d1348f23d02516..0000000000000000000000000000000000000000
--- a/spaces/hu-po/speech2speech/app.py
+++ /dev/null
@@ -1,292 +0,0 @@
-import asyncio
-import logging
-import os
-import random
-from typing import Dict, List, Tuple
-
-import gradio as gr
-import yaml
-
-from src.elevenlabs import (Speaker, check_voice_exists, get_make_voice,
- play_history, save_history, set_elevenlabs_key)
-from src.openailib import top_response, speech_to_text, set_openai_key
-from src.tube import extract_audio
-
-logging.basicConfig(level=logging.INFO)
-log = logging.getLogger(__name__)
-
-
-class ConversationState:
- COLORS: list = ['#FFA07A', '#F08080', '#AFEEEE', '#B0E0E6', '#DDA0DD',
- '#FFFFE0', '#F0E68C', '#90EE90', '#87CEFA', '#FFB6C1']
- YAML_FILEPATH: str = os.path.join(os.path.dirname(__file__), 'voices.yaml')
- AUDIO_SAVEDIR: str = os.path.join(
- os.path.dirname(__file__), 'audio_export')
-
- def __init__(self,
- names: list = None,
- iam: str = None,
- model: str = "gpt-3.5-turbo",
- max_tokens: int = 30,
- temperature: float = 0.5,
- history: list = None):
- self.model = model
- self.max_tokens = max_tokens
- self.temperature = temperature
- # Make sure save dir exists, make any necessary directories
- os.makedirs(self.AUDIO_SAVEDIR, exist_ok=True)
- self.audio_savepath = os.path.join(
- self.AUDIO_SAVEDIR, 'conversation.wav')
- log.info(f"Resetting conversation")
- with open(self.YAML_FILEPATH, 'r') as file:
- self.characters_yaml = file.read()
- file.seek(0)
- self.characters_dict = yaml.safe_load(file)
- self.all_characters = [
- name for name in self.characters_dict.keys()]
- self.names = names or random.choices(self.all_characters, k=2)
- self.iam = iam or random.choice(self.names)
- assert self.iam in self.names, f"{self.iam} not in {self.names}"
- log.info(f"Loading voices")
- self.speakers: Dict[str, Speaker] = {}
- self.speakers_descriptions: str = ''
- for i, name in enumerate(self.names):
- if check_voice_exists(name) is None:
- log.warning(f"Voice {name} does not exist")
- continue
- _speaker = Speaker(
- name=name,
- voice=get_make_voice(name),
- color=self.COLORS[i % len(self.COLORS)],
- description=self.characters_dict[name].get(
- "description", None),
- )
- self.speakers[name] = _speaker
- if _speaker.description is not None:
- self.speakers_descriptions += f"{_speaker.name}: {_speaker.description}.\n"
- # System is fed into OpenAI to condition the prompt
- self.system = f"You create funny conversation dialogues."
- self.system += f"This conversation is between {', '.join(self.names)}."
- self.system += "Do not introduce new characters."
- self.system += "Descriptions for each of the characters are:\n"
- for speaker in self.speakers.values():
- self.system += f"{speaker.name}: {speaker.description}\n"
- self.system += "Only return one person's response at a time."
- self.system += "Each response must start with the character name, then a colon, then their response in a single line."
- self.system += "Keep the responses short and witty."
- self.system += "Make sure the responses are only one sentence long."
- self.system += "Do not continue a previous response. Always start a new response."
- # History is fed in at every step
- self.step = 0
- if history is None:
- self.history: List[Tuple[Speaker, str]] = []
-
- def add_to_history(self, text: str, speaker: Speaker = None):
- if speaker is None:
- speaker = self.speakers[self.iam]
- self.history.append((speaker, text))
-
- def history_to_prompt(self) -> str:
- prompt: str = ''
- for speaker, text in self.history:
- prompt += f"{speaker.name}:{text}\n"
- return prompt
-
- def html_history(self) -> str:
- history_html: str = ''
- for speaker, text in self.history:
- _bubble = f"{speaker.name}: {text}
"
- history_html += _bubble
- return history_html
-
-
-# Storing state in the global scope like this is bad, but
-# perfect is the enemy of good enough and gradio is kind of shit
-STATE = ConversationState()
-
-
-def reset(names, iam, model, max_tokens, temperature):
- # Push new global state to the global scope
- global STATE
- STATE = ConversationState(
- names=names,
- iam=iam,
- model=model,
- max_tokens=max_tokens,
- temperature=temperature,
- )
- return STATE.html_history()
-
-
-def step_mic(audio):
- global STATE
- try:
- request = speech_to_text(audio)
- STATE.add_to_history(request)
- except TypeError as e:
- log.warning(e)
- pass
- return STATE.html_history()
-
-
-def step_continue():
- global STATE
- response = top_response(STATE.history_to_prompt(),
- system=STATE.system,
- model=STATE.model,
- max_tokens=STATE.max_tokens,
- temperature=STATE.temperature,
- )
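-    # The model is prompted to answer with "Name: text" lines; keep only lines
-    # that parse cleanly and whose speaker is a known character.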
- for line in response.splitlines():
- try:
- # TODO: Add any filters here as assertion errors
- if not line:
- continue
- assert ":" in line, f"Line {line} does not have a colon"
- name, text = line.split(":")
- assert name in STATE.all_characters, f"Name {name} is not in {STATE.all_characters}"
- speaker = STATE.speakers[name]
- assert len(text) > 0, f"Text {text} is empty"
- STATE.add_to_history(text, speaker=speaker)
- except AssertionError as e:
- log.warning(e)
- continue
- return STATE.html_history()
-
-
-def save_audio():
- global STATE
- log.info(f"Saving audio")
- asyncio.run(save_history(STATE.history, STATE.audio_savepath))
- return STATE.audio_savepath
-
-
-def play_audio():
- global STATE
- log.info(f"Playing audio")
- asyncio.run(play_history(STATE.history))
- return STATE.html_history()
-
-
-def make_voices(voices_yaml: str):
- global STATE
- try:
- STATE.characters_dict = yaml.safe_load(voices_yaml)
- for name, metadata in STATE.characters_dict.items():
- videos = metadata['references']
- assert isinstance(name, str), f"Name {name} is not a string"
- assert isinstance(videos, list), f"Videos {videos} is not a list"
- if check_voice_exists(name):
- continue
- audio_paths = []
- for i, video in enumerate(videos):
- assert isinstance(video, Dict), f"Video {video} is not a dict"
- assert 'url' in video, f"Video {video} does not have a url"
- url = video['url']
- start_minute = video.get('start_minute', 0)
- duration = video.get('duration_seconds', 120)
- label = os.path.join(STATE.AUDIO_SAVEDIR, f"audio.{name}.{i}")
- output_path = extract_audio(url, label, start_minute, duration)
- audio_paths.append(output_path)
- get_make_voice(name, audio_paths)
- except Exception as e:
- raise e
- # return f"Error: {e}"
- return "Success"
-
-
-# Define the main GradIO UI
-with gr.Blocks() as demo:
-    gr.HTML('''
-        <h1>Speech2Speech</h1>
-        <p>Make a private copy of this space to paste your API keys.</p>
-    ''')
- with gr.Row():
- openai_api_key_textbox = gr.Textbox(
- placeholder="Paste your OpenAI API key here",
- show_label=False,
- lines=1,
- type="password",
- )
- elevenlabs_api_key_textbox = gr.Textbox(
- placeholder="Paste your ElevenLabs API key here",
- show_label=False,
- lines=1,
- type="password",
- )
- with gr.Tab("Conversation"):
- gr_convo_output = gr.HTML()
- with gr.Row():
- with gr.Column():
- gr_mic = gr.Audio(
- label="Record audio into conversation",
- source="microphone",
- type="filepath",
- )
- gr_add_button = gr.Button(value="Add to conversation")
- gr_playaudio_button = gr.Button(value="Play audio")
- gr_saveaudio_button = gr.Button(value="Export audio")
- gr_outputaudio = gr.Audio(
- label="Audio output",
- source="upload",
- type="filepath",
- )
- with gr.Column():
- gr_iam = gr.Dropdown(
- choices=STATE.all_characters, label="I am", value=STATE.iam)
- gr_chars = gr.CheckboxGroup(
- STATE.all_characters, label="Characters", value=STATE.names)
- gr_reset_button = gr.Button(value="Reset conversation")
- with gr.Accordion("Settings", open=False):
- openai_api_key_textbox = gr.Textbox(
- placeholder="Paste your OpenAI API key here",
- show_label=False,
- lines=1,
- type="password",
- )
- elevenlabs_api_key_textbox = gr.Textbox(
- placeholder="Paste your ElevenLabs API key here",
- show_label=False,
- lines=1,
- type="password",
- )
- gr_model = gr.Dropdown(choices=["gpt-3.5-turbo", "gpt-4"],
- label='GPT Model behind conversation', value=STATE.model)
- gr_max_tokens = gr.Slider(minimum=1, maximum=500, value=STATE.max_tokens,
- label="Max tokens", step=1)
- gr_temperature = gr.Slider(
- minimum=0.0, maximum=1.0, value=STATE.temperature, label="Temperature (randomness in conversation)")
- with gr.Tab("New Characters"):
- gr_make_voice_button = gr.Button(value="Update Characters")
- gr_voice_data = gr.Textbox(
- lines=25, label="Character YAML config", value=STATE.characters_yaml)
- gr_make_voice_output = gr.Textbox(
- lines=2, label="Character creation logs...")
-
-    gr.HTML('''
-    <p>Created by Hu Po. GitHub: speech2speech</p>
-    ''')
-
- # Buttons and actions
- gr_mic.change(step_mic, gr_mic, gr_convo_output)
- openai_api_key_textbox.change(set_openai_key, openai_api_key_textbox, None)
- elevenlabs_api_key_textbox.change(
- set_elevenlabs_key, elevenlabs_api_key_textbox, None)
- gr_add_button.click(step_continue, None, gr_convo_output)
- gr_reset_button.click(
- reset,
- inputs=[gr_chars, gr_iam, gr_model, gr_max_tokens, gr_temperature],
- outputs=[gr_convo_output],
- )
- gr_saveaudio_button.click(save_audio, None, gr_outputaudio)
- gr_playaudio_button.click(play_audio, None, None)
- gr_make_voice_button.click(
- make_voices, inputs=gr_voice_data, outputs=gr_make_voice_output,
- )
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/huggingchat/chat-ui/src/lib/types/WebSearch.ts b/spaces/huggingchat/chat-ui/src/lib/types/WebSearch.ts
deleted file mode 100644
index ad4ac7441440246c13a830846a6024fcc04834e8..0000000000000000000000000000000000000000
--- a/spaces/huggingchat/chat-ui/src/lib/types/WebSearch.ts
+++ /dev/null
@@ -1,45 +0,0 @@
-import type { ObjectId } from "mongodb";
-import type { Conversation } from "./Conversation";
-import type { Timestamps } from "./Timestamps";
-
-export interface WebSearch extends Timestamps {
- _id?: ObjectId;
- convId?: Conversation["_id"];
-
- prompt: string;
-
- searchQuery: string;
- results: WebSearchSource[];
- context: string;
- contextSources: WebSearchSource[];
-}
-
-export interface WebSearchSource {
- title: string;
- link: string;
- hostname: string;
- text?: string; // You.com provides text of webpage right away
-}
-
-export type WebSearchMessageSources = {
- type: "sources";
- sources: WebSearchSource[];
-};
-
-export interface YouWebSearch {
- hits: YouSearchHit[];
- latency: number;
-}
-
-interface YouSearchHit {
- url: string;
- title: string;
- description: string;
- snippets: string[];
-}
-
-// eslint-disable-next-line no-shadow
-export enum WebSearchProvider {
- GOOGLE = "Google",
- YOU = "You.com",
-}
diff --git a/spaces/hunger11243/VITS-Umamusume-voice-synthesizer/hubert_model.py b/spaces/hunger11243/VITS-Umamusume-voice-synthesizer/hubert_model.py
deleted file mode 100644
index 6c7f8716c268d0f371f5a9f7995f59bd4b9082d1..0000000000000000000000000000000000000000
--- a/spaces/hunger11243/VITS-Umamusume-voice-synthesizer/hubert_model.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import copy
-from typing import Optional, Tuple
-import random
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
-
-class Hubert(nn.Module):
- def __init__(self, num_label_embeddings: int = 100, mask: bool = True):
- super().__init__()
- self._mask = mask
- self.feature_extractor = FeatureExtractor()
- self.feature_projection = FeatureProjection()
- self.positional_embedding = PositionalConvEmbedding()
- self.norm = nn.LayerNorm(768)
- self.dropout = nn.Dropout(0.1)
- self.encoder = TransformerEncoder(
- nn.TransformerEncoderLayer(
- 768, 12, 3072, activation="gelu", batch_first=True
- ),
- 12,
- )
- self.proj = nn.Linear(768, 256)
-
- self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_())
- self.label_embedding = nn.Embedding(num_label_embeddings, 256)
-
- def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
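-        # SpecAugment-style span masking during training (mask_prob=0.8, span
-        # length 10, at least 2 spans); masked frames get the learned embedding.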
- mask = None
- if self.training and self._mask:
- mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2)
- x[mask] = self.masked_spec_embed.to(x.dtype)
- return x, mask
-
- def encode(
- self, x: torch.Tensor, layer: Optional[int] = None
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- x = self.feature_extractor(x)
- x = self.feature_projection(x.transpose(1, 2))
- x, mask = self.mask(x)
- x = x + self.positional_embedding(x)
- x = self.dropout(self.norm(x))
- x = self.encoder(x, output_layer=layer)
- return x, mask
-
- def logits(self, x: torch.Tensor) -> torch.Tensor:
- logits = torch.cosine_similarity(
- x.unsqueeze(2),
- self.label_embedding.weight.unsqueeze(0).unsqueeze(0),
- dim=-1,
- )
- return logits / 0.1
-
- def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- x, mask = self.encode(x)
- x = self.proj(x)
- logits = self.logits(x)
- return logits, mask
-
-
-class HubertSoft(Hubert):
- def __init__(self):
- super().__init__()
-
- @torch.inference_mode()
- def units(self, wav: torch.Tensor) -> torch.Tensor:
- wav = F.pad(wav, ((400 - 320) // 2, (400 - 320) // 2))
- x, _ = self.encode(wav)
- return self.proj(x)
-
-
-class FeatureExtractor(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False)
- self.norm0 = nn.GroupNorm(512, 512)
- self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False)
- self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = F.gelu(self.norm0(self.conv0(x)))
- x = F.gelu(self.conv1(x))
- x = F.gelu(self.conv2(x))
- x = F.gelu(self.conv3(x))
- x = F.gelu(self.conv4(x))
- x = F.gelu(self.conv5(x))
- x = F.gelu(self.conv6(x))
- return x
-
-
-class FeatureProjection(nn.Module):
- def __init__(self):
- super().__init__()
- self.norm = nn.LayerNorm(512)
- self.projection = nn.Linear(512, 768)
- self.dropout = nn.Dropout(0.1)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.norm(x)
- x = self.projection(x)
- x = self.dropout(x)
- return x
-
-
-class PositionalConvEmbedding(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv = nn.Conv1d(
- 768,
- 768,
- kernel_size=128,
- padding=128 // 2,
- groups=16,
- )
- self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.conv(x.transpose(1, 2))
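-        # The even kernel (128) with padding 64 yields one extra frame; trim it.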
- x = F.gelu(x[:, :, :-1])
- return x.transpose(1, 2)
-
-
-class TransformerEncoder(nn.Module):
- def __init__(
- self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int
- ) -> None:
- super(TransformerEncoder, self).__init__()
- self.layers = nn.ModuleList(
- [copy.deepcopy(encoder_layer) for _ in range(num_layers)]
- )
- self.num_layers = num_layers
-
- def forward(
- self,
- src: torch.Tensor,
- mask: torch.Tensor = None,
- src_key_padding_mask: torch.Tensor = None,
- output_layer: Optional[int] = None,
- ) -> torch.Tensor:
- output = src
- for layer in self.layers[:output_layer]:
- output = layer(
- output, src_mask=mask, src_key_padding_mask=src_key_padding_mask
- )
- return output
-
-
-def _compute_mask(
- shape: Tuple[int, int],
- mask_prob: float,
- mask_length: int,
- device: torch.device,
- min_masks: int = 0,
-) -> torch.Tensor:
- batch_size, sequence_length = shape
-
- if mask_length < 1:
- raise ValueError("`mask_length` has to be bigger than 0.")
-
- if mask_length > sequence_length:
- raise ValueError(
- f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`"
- )
-
- # compute number of masked spans in batch
- num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random())
- num_masked_spans = max(num_masked_spans, min_masks)
-
- # make sure num masked indices <= sequence_length
- if num_masked_spans * mask_length > sequence_length:
- num_masked_spans = sequence_length // mask_length
-
- # SpecAugment mask to fill
- mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
-
- # uniform distribution to sample from, make sure that offset samples are < sequence_length
- uniform_dist = torch.ones(
- (batch_size, sequence_length - (mask_length - 1)), device=device
- )
-
- # get random indices to mask
- mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
-
- # expand masked indices to masked spans
- mask_indices = (
- mask_indices.unsqueeze(dim=-1)
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- offsets = (
- torch.arange(mask_length, device=device)[None, None, :]
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- mask_idxs = mask_indices + offsets
-
- # scatter indices to mask
- mask = mask.scatter(1, mask_idxs, True)
-
- return mask
-
-
-def hubert_soft(
- path: str
-) -> HubertSoft:
- r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
- Args:
- path (str): path of a pretrained model
- """
- hubert = HubertSoft()
- checkpoint = torch.load(path)
- consume_prefix_in_state_dict_if_present(checkpoint, "module.")
- hubert.load_state_dict(checkpoint)
- hubert.eval()
- return hubert
diff --git a/spaces/hysts/ViTPose_video/videos/README.md b/spaces/hysts/ViTPose_video/videos/README.md
deleted file mode 100644
index ed08899ec09a378ba0a64214a5635555c3fead8a..0000000000000000000000000000000000000000
--- a/spaces/hysts/ViTPose_video/videos/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
-These videos are from the following public domain:
-
-- https://www.pexels.com/video/young-guy-doing-break-dance-on-the-street-5362370/
-- https://www.pexels.com/video/a-woman-dancing-at-home-6003986/
-- https://www.pexels.com/video/long-haired-man-dancing-in-a-library-6344381/
-- https://www.pexels.com/video/a-female-model-dancing-around-6815069/
diff --git a/spaces/iakarshu/latr-vqa/app.py b/spaces/iakarshu/latr-vqa/app.py
deleted file mode 100644
index dd72896a89c2629ddda552e97bfaff1a0bd6dcd1..0000000000000000000000000000000000000000
--- a/spaces/iakarshu/latr-vqa/app.py
+++ /dev/null
@@ -1,148 +0,0 @@
-# Requirements.txt
-from torch import cuda
-from transformers import T5Tokenizer, T5ForConditionalGeneration
-import gradio as gr
-from utils import convert_ans_to_token, convert_ques_to_token, rotate, convert_token_to_ques, convert_token_to_answer
-from modeling import LaTr_for_pretraining, LaTr_for_finetuning, LaTrForVQA
-from dataset import load_json_file, get_specific_file, resize_align_bbox, get_tokens_with_boxes, create_features
-import torch.nn as nn
-from PIL import Image, ImageDraw
-import pytesseract
-from tqdm.auto import tqdm
-import numpy as np
-import json
-import os
-import torch
-from torchvision import transforms
-
-
-# install PyTesseract
-os.system('pip install -q pytesseract')
-os.environ["TOKENIZERS_PARALLELISM"] = "false"
-
-
-# Default Library import
-# Visualization libraries
-
-# Specific libraries of LaTr
-
-# Setting the hyperparameters as well as primary configurations
-
-PAD_TOKEN_BOX = [0, 0, 0, 0]
-max_seq_len = 512
-batch_size = 2
-target_size = (500, 384)
-t5_model = "t5-base"
-
-
-device = 'cuda' if cuda.is_available() else 'cpu'
-
-
-# Configuration for the model
-config = {
- 't5_model': 't5-base',
- 'vocab_size': 32128,
- 'hidden_state': 768,
- 'max_2d_position_embeddings': 1001,
- 'classes': 32128, # number of tokens
- 'seq_len': 512
-}
-
-tokenizer = T5Tokenizer.from_pretrained(t5_model)
-latr = LaTrForVQA(config)
-url = 'https://www.kaggleusercontent.com/kf/99663112/eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..2HGa6jqeAbugMJYxSkh7eA.XkaLSf8XlITet17Bscupegw9zWLw-IEizSy1lM-_PJF_Gfj-YuinOpDw4ad0M8r-s3WlnclQhHYrd2seaZVjBmkm5WSE6Dae1fW54dnNhyWF5w5O2VafNar7QSuUTSRzacJcmtqI1ypL3OZofwXuETbXq4weeqfDptFS5luxuV0P4Vaer_xEgfsdld6v8O5jjMXwb1CVmPCjMdZUE-HTgzTDiwv3Lb-P3dkRgU7q-iI5GeYZCODYGrX-koxya9DlfzKQZXmJmvtMj45vUZ8OSRB0_hTc7UosQanA-SalWznnOuyOgwl4hMag5toTomriWsxfvJIRBn9CYgFcvUJNqO_kDzBUoAwnagjcxXeEIJTJglwAl9Rs37XyfJAZr7yQ_YTXeRW1j2QMsT_M3qtS96IKRTpsqPVibl8Vrs9Q5g_vKccIQR9t7R9ma_DZLwjWYhDvDO06AZqtdaYGfWaOrbqe8dDvJkZoHsZEO8ukpIH6YNLyCO_dqgRsE77I9jqxiUqQh1KnuNv2hGRSlQR7u8OF7lpiRS7JEwj2MaxlzD58dyhOOLDqrbLp7XWrgV79EQcRYHFSMfhDvG0zmGvHjWGAg-LGhnYIc0NMVhyRv5Pfta9WYEl4qXxCTZWe4olgV79WHLqksQMVyTteheB36n4biHZKx4KZj7k-j3aSI72DIAvj7_UFeHxUTTZ1c6MB.7BF6J5MPMuhQFU48xVZ2qQ/models/epoch=0-step=34602.ckpt'
-
-
-
-try:
- latr = latr.load_from_checkpoint(url)
- print("Checkpoint loaded successfully")
-except:
- print("Checkpoint not loaded")
- pass
-
-
-image = gr.inputs.Image(type="pil")
-question = gr.inputs.Textbox(label="Question")
-answer = gr.outputs.Textbox(label="Predicted answer")
-examples = [["remote.jpg", "what number is the button near the top left?"]]
-
-
-from transformers import ViTFeatureExtractor, ViTModel
-vit_feat_extract = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
-
-import torchvision
-import numpy as np
-
-def answer_question(image, question):
-
- # Extracting features from the image
- image.save("sample.png")
- img, boxes, tokenized_words = create_features("sample.png",
- tokenizer=tokenizer,
- target_size=target_size,
- max_seq_length=max_seq_len,
- use_ocr=True
- )
-
- ## Converting the boxes as per the format required for model input
- boxes = torch.as_tensor(boxes, dtype=torch.int32)
- width = (boxes[:, 2] - boxes[:, 0]).view(-1, 1)
- height = (boxes[:, 3] - boxes[:, 1]).view(-1, 1)
- boxes = torch.cat([boxes, width, height], axis = -1)
-
-    ## Clamping the values, as some of the box coordinates are out of bounds
-    boxes[:, 0] = torch.clamp(boxes[:, 0], min = 0, max = 1000)
-    boxes[:, 2] = torch.clamp(boxes[:, 2], min = 0, max = 1000)
-    boxes[:, 4] = torch.clamp(boxes[:, 4], min = 0, max = 1000)
-
-    boxes[:, 1] = torch.clamp(boxes[:, 1], min = 0, max = 1000)
-    boxes[:, 3] = torch.clamp(boxes[:, 3], min = 0, max = 1000)
-    boxes[:, 5] = torch.clamp(boxes[:, 5], min = 0, max = 1000)
-
- ## Tensor tokenized words
- tokenized_words = torch.as_tensor(tokenized_words, dtype=torch.int32)
- img = np.array(img)
- img = torchvision.transforms.ToTensor()(img)
- question = convert_ques_to_token(question = question, tokenizer = tokenizer)
-
- ## Expanding the dimension for inference
- boxes = boxes.unsqueeze(0)
- tokenized_words = tokenized_words.unsqueeze(0)
- question = question.unsqueeze(0)
-
- # print("Shape of Image is:", img.shape)
- img = vit_feat_extract(img, return_tensors = 'pt')['pixel_values']
- if int(len(img.shape)) == 3:
- img = img.unsqueeze(0)
-
- encoding = {'img': img, 'boxes': boxes, 'tokenized_words': tokenized_words, 'question': question}
-
- with torch.no_grad():
- logits = latr.forward(encoding)
- logits = logits.squeeze(0)
-
- _, preds = torch.max(logits, dim = 1)
- preds = preds.detach().cpu()
- mask = torch.clamp(preds, min = 0, max = 1)
- last_non_zero_argument = (mask != 0).nonzero()[1][-1]
-
- predicted_ans = convert_token_to_ques(preds[:last_non_zero_argument], tokenizer)
- return predicted_ans
-
-
-# Taken from here: https://huggingface.co/spaces/nielsr/vilt-vqa/blob/main/app.py
-title = "Interactive demo: LaTr (Layout Aware Transformer) for VQA"
-description = "Gradio Demo for LaTr (Layout Aware Transformer),trained on TextVQA Dataset. To use it, simply upload your image and type a question and click 'submit', or click one of the examples to load them. Read more at the links below."
-article = "LaTr: Layout-aware transformer for scene-text VQA,a novel multimodal architecture for Scene Text Visual Question Answering (STVQA) | Github Repo
"
-examples = [['remote.png', "Is remote present in the picture?"]]
-
-interface = gr.Interface(fn=answer_question,
- inputs=[image, question],
- outputs=answer,
- examples=examples,
- title=title,
- description=description,
- article=article,
- enable_queue=True)
-interface.launch(debug=True)
diff --git a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/eval/__init__.py b/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/eval/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/iamironman4279/SadTalker/src/face3d/options/train_options.py b/spaces/iamironman4279/SadTalker/src/face3d/options/train_options.py
deleted file mode 100644
index 1337bfdd5f372b5c686a91b394a2aadbe5741f44..0000000000000000000000000000000000000000
--- a/spaces/iamironman4279/SadTalker/src/face3d/options/train_options.py
+++ /dev/null
@@ -1,53 +0,0 @@
-"""This script contains the training options for Deep3DFaceRecon_pytorch
-"""
-
-from .base_options import BaseOptions
-from util import util
-
-class TrainOptions(BaseOptions):
- """This class includes training options.
-
- It also includes shared options defined in BaseOptions.
- """
-
- def initialize(self, parser):
- parser = BaseOptions.initialize(self, parser)
- # dataset parameters
- # for train
- parser.add_argument('--data_root', type=str, default='./', help='dataset root')
- parser.add_argument('--flist', type=str, default='datalist/train/masks.txt', help='list of mask names of training set')
- parser.add_argument('--batch_size', type=int, default=32)
- parser.add_argument('--dataset_mode', type=str, default='flist', help='chooses how datasets are loaded. [None | flist]')
- parser.add_argument('--serial_batches', action='store_true', help='if true, takes images in order to make batches, otherwise takes them randomly')
- parser.add_argument('--num_threads', default=4, type=int, help='# threads for loading data')
- parser.add_argument('--max_dataset_size', type=int, default=float("inf"), help='Maximum number of samples allowed per dataset. If the dataset directory contains more than max_dataset_size, only a subset is loaded.')
- parser.add_argument('--preprocess', type=str, default='shift_scale_rot_flip', help='scaling and cropping of images at load time [shift_scale_rot_flip | shift_scale | shift | shift_rot_flip ]')
- parser.add_argument('--use_aug', type=util.str2bool, nargs='?', const=True, default=True, help='whether use data augmentation')
-
- # for val
- parser.add_argument('--flist_val', type=str, default='datalist/val/masks.txt', help='list of mask names of val set')
- parser.add_argument('--batch_size_val', type=int, default=32)
-
-
- # visualization parameters
- parser.add_argument('--display_freq', type=int, default=1000, help='frequency of showing training results on screen')
- parser.add_argument('--print_freq', type=int, default=100, help='frequency of showing training results on console')
-
- # network saving and loading parameters
- parser.add_argument('--save_latest_freq', type=int, default=5000, help='frequency of saving the latest results')
- parser.add_argument('--save_epoch_freq', type=int, default=1, help='frequency of saving checkpoints at the end of epochs')
- parser.add_argument('--evaluation_freq', type=int, default=5000, help='evaluation freq')
- parser.add_argument('--save_by_iter', action='store_true', help='whether saves model by iteration')
- parser.add_argument('--continue_train', action='store_true', help='continue training: load the latest model')
-        parser.add_argument('--epoch_count', type=int, default=1, help='the starting epoch count, we save the model by <epoch_count>, <epoch_count>+<save_latest_freq>, ...')
- parser.add_argument('--phase', type=str, default='train', help='train, val, test, etc')
- parser.add_argument('--pretrained_name', type=str, default=None, help='resume training from another checkpoint')
-
- # training parameters
- parser.add_argument('--n_epochs', type=int, default=20, help='number of epochs with the initial learning rate')
- parser.add_argument('--lr', type=float, default=0.0001, help='initial learning rate for adam')
- parser.add_argument('--lr_policy', type=str, default='step', help='learning rate policy. [linear | step | plateau | cosine]')
- parser.add_argument('--lr_decay_epochs', type=int, default=10, help='multiply by a gamma every lr_decay_epochs epoches')
-
- self.isTrain = True
- return parser
diff --git a/spaces/iamironman4279/SadTalker/src/face3d/util/preprocess.py b/spaces/iamironman4279/SadTalker/src/face3d/util/preprocess.py
deleted file mode 100644
index b77a3a4058c208e5ba8cb1cfbb563954a5f7a3e2..0000000000000000000000000000000000000000
--- a/spaces/iamironman4279/SadTalker/src/face3d/util/preprocess.py
+++ /dev/null
@@ -1,103 +0,0 @@
-"""This script contains the image preprocessing code for Deep3DFaceRecon_pytorch
-"""
-
-import numpy as np
-from scipy.io import loadmat
-from PIL import Image
-import cv2
-import os
-from skimage import transform as trans
-import torch
-import warnings
-warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
-warnings.filterwarnings("ignore", category=FutureWarning)
-
-
-# calculating least square problem for image alignment
-def POS(xp, x):
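-    # xp: 2 x N image-plane landmarks, x: 3 x N reference 3D landmarks. Solve a
-    # scaled-orthographic projection in the least-squares sense and return the
-    # 2D translation t and the scale s.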
- npts = xp.shape[1]
-
- A = np.zeros([2*npts, 8])
-
- A[0:2*npts-1:2, 0:3] = x.transpose()
- A[0:2*npts-1:2, 3] = 1
-
- A[1:2*npts:2, 4:7] = x.transpose()
- A[1:2*npts:2, 7] = 1
-
- b = np.reshape(xp.transpose(), [2*npts, 1])
-
- k, _, _, _ = np.linalg.lstsq(A, b)
-
- R1 = k[0:3]
- R2 = k[4:7]
- sTx = k[3]
- sTy = k[7]
- s = (np.linalg.norm(R1) + np.linalg.norm(R2))/2
- t = np.stack([sTx, sTy], axis=0)
-
- return t, s
-
-# resize and crop images for face reconstruction
-def resize_n_crop_img(img, lm, t, s, target_size=224., mask=None):
- w0, h0 = img.size
- w = (w0*s).astype(np.int32)
- h = (h0*s).astype(np.int32)
- left = (w/2 - target_size/2 + float((t[0] - w0/2)*s)).astype(np.int32)
- right = left + target_size
- up = (h/2 - target_size/2 + float((h0/2 - t[1])*s)).astype(np.int32)
- below = up + target_size
-
- img = img.resize((w, h), resample=Image.BICUBIC)
- img = img.crop((left, up, right, below))
-
- if mask is not None:
- mask = mask.resize((w, h), resample=Image.BICUBIC)
- mask = mask.crop((left, up, right, below))
-
- lm = np.stack([lm[:, 0] - t[0] + w0/2, lm[:, 1] -
- t[1] + h0/2], axis=1)*s
- lm = lm - np.reshape(
- np.array([(w/2 - target_size/2), (h/2-target_size/2)]), [1, 2])
-
- return img, lm, mask
-
-# utils for face reconstruction
-def extract_5p(lm):
- lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1
- lm5p = np.stack([lm[lm_idx[0], :], np.mean(lm[lm_idx[[1, 2]], :], 0), np.mean(
- lm[lm_idx[[3, 4]], :], 0), lm[lm_idx[5], :], lm[lm_idx[6], :]], axis=0)
- lm5p = lm5p[[1, 2, 0, 3, 4], :]
- return lm5p
-
-# utils for face reconstruction
-def align_img(img, lm, lm3D, mask=None, target_size=224., rescale_factor=102.):
- """
- Return:
- transparams --numpy.array (raw_W, raw_H, scale, tx, ty)
- img_new --PIL.Image (target_size, target_size, 3)
- lm_new --numpy.array (68, 2), y direction is opposite to v direction
- mask_new --PIL.Image (target_size, target_size)
-
- Parameters:
- img --PIL.Image (raw_H, raw_W, 3)
- lm --numpy.array (68, 2), y direction is opposite to v direction
- lm3D --numpy.array (5, 3)
- mask --PIL.Image (raw_H, raw_W, 3)
- """
-
- w0, h0 = img.size
- if lm.shape[0] != 5:
- lm5p = extract_5p(lm)
- else:
- lm5p = lm
-
- # calculate translation and scale factors using 5 facial landmarks and standard landmarks of a 3D face
- t, s = POS(lm5p.transpose(), lm3D.transpose())
- s = rescale_factor/s
-
- # processing the image
- img_new, lm_new, mask_new = resize_n_crop_img(img, lm, t, s, target_size=target_size, mask=mask)
- trans_params = np.array([w0, h0, s, t[0], t[1]])
-
- return trans_params, img_new, lm_new, mask_new
diff --git a/spaces/iamironman4279/SadTalker/src/facerender/animate.py b/spaces/iamironman4279/SadTalker/src/facerender/animate.py
deleted file mode 100644
index 781f5a3318a086049cc6b74393073ddda7001d5e..0000000000000000000000000000000000000000
--- a/spaces/iamironman4279/SadTalker/src/facerender/animate.py
+++ /dev/null
@@ -1,257 +0,0 @@
-import os
-import cv2
-import yaml
-import numpy as np
-import warnings
-from skimage import img_as_ubyte
-import safetensors
-import safetensors.torch
-warnings.filterwarnings('ignore')
-
-
-import imageio
-import torch
-import torchvision
-
-
-from src.facerender.modules.keypoint_detector import HEEstimator, KPDetector
-from src.facerender.modules.mapping import MappingNet
-from src.facerender.modules.generator import OcclusionAwareGenerator, OcclusionAwareSPADEGenerator
-from src.facerender.modules.make_animation import make_animation
-
-from pydub import AudioSegment
-from src.utils.face_enhancer import enhancer_generator_with_len, enhancer_list
-from src.utils.paste_pic import paste_pic
-from src.utils.videoio import save_video_with_watermark
-
-try:
- import webui # in webui
- in_webui = True
-except:
- in_webui = False
-
-class AnimateFromCoeff():
-
- def __init__(self, sadtalker_path, device):
-
- with open(sadtalker_path['facerender_yaml']) as f:
- config = yaml.safe_load(f)
-
- generator = OcclusionAwareSPADEGenerator(**config['model_params']['generator_params'],
- **config['model_params']['common_params'])
- kp_extractor = KPDetector(**config['model_params']['kp_detector_params'],
- **config['model_params']['common_params'])
- he_estimator = HEEstimator(**config['model_params']['he_estimator_params'],
- **config['model_params']['common_params'])
- mapping = MappingNet(**config['model_params']['mapping_params'])
-
- generator.to(device)
- kp_extractor.to(device)
- he_estimator.to(device)
- mapping.to(device)
- for param in generator.parameters():
- param.requires_grad = False
- for param in kp_extractor.parameters():
- param.requires_grad = False
- for param in he_estimator.parameters():
- param.requires_grad = False
- for param in mapping.parameters():
- param.requires_grad = False
-
- if sadtalker_path is not None:
- if 'checkpoint' in sadtalker_path: # use safe tensor
- self.load_cpk_facevid2vid_safetensor(sadtalker_path['checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=None)
- else:
- self.load_cpk_facevid2vid(sadtalker_path['free_view_checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator)
- else:
- raise AttributeError("Checkpoint should be specified for video head pose estimator.")
-
- if sadtalker_path['mappingnet_checkpoint'] is not None:
- self.load_cpk_mapping(sadtalker_path['mappingnet_checkpoint'], mapping=mapping)
- else:
- raise AttributeError("Checkpoint should be specified for video head pose estimator.")
-
- self.kp_extractor = kp_extractor
- self.generator = generator
- self.he_estimator = he_estimator
- self.mapping = mapping
-
- self.kp_extractor.eval()
- self.generator.eval()
- self.he_estimator.eval()
- self.mapping.eval()
-
- self.device = device
-
- def load_cpk_facevid2vid_safetensor(self, checkpoint_path, generator=None,
- kp_detector=None, he_estimator=None,
- device="cpu"):
-
- checkpoint = safetensors.torch.load_file(checkpoint_path)
-
- if generator is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'generator' in k:
- x_generator[k.replace('generator.', '')] = v
- generator.load_state_dict(x_generator)
- if kp_detector is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'kp_extractor' in k:
- x_generator[k.replace('kp_extractor.', '')] = v
- kp_detector.load_state_dict(x_generator)
- if he_estimator is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'he_estimator' in k:
- x_generator[k.replace('he_estimator.', '')] = v
- he_estimator.load_state_dict(x_generator)
-
- return None
-
- def load_cpk_facevid2vid(self, checkpoint_path, generator=None, discriminator=None,
- kp_detector=None, he_estimator=None, optimizer_generator=None,
- optimizer_discriminator=None, optimizer_kp_detector=None,
- optimizer_he_estimator=None, device="cpu"):
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
- if generator is not None:
- generator.load_state_dict(checkpoint['generator'])
- if kp_detector is not None:
- kp_detector.load_state_dict(checkpoint['kp_detector'])
- if he_estimator is not None:
- he_estimator.load_state_dict(checkpoint['he_estimator'])
- if discriminator is not None:
- try:
- discriminator.load_state_dict(checkpoint['discriminator'])
- except:
-                print ('No discriminator in the state-dict. Discriminator will be randomly initialized')
- if optimizer_generator is not None:
- optimizer_generator.load_state_dict(checkpoint['optimizer_generator'])
- if optimizer_discriminator is not None:
- try:
- optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
- except RuntimeError as e:
-                print ('No discriminator optimizer in the state-dict. Optimizer will not be initialized')
- if optimizer_kp_detector is not None:
- optimizer_kp_detector.load_state_dict(checkpoint['optimizer_kp_detector'])
- if optimizer_he_estimator is not None:
- optimizer_he_estimator.load_state_dict(checkpoint['optimizer_he_estimator'])
-
- return checkpoint['epoch']
-
- def load_cpk_mapping(self, checkpoint_path, mapping=None, discriminator=None,
- optimizer_mapping=None, optimizer_discriminator=None, device='cpu'):
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
- if mapping is not None:
- mapping.load_state_dict(checkpoint['mapping'])
- if discriminator is not None:
- discriminator.load_state_dict(checkpoint['discriminator'])
- if optimizer_mapping is not None:
- optimizer_mapping.load_state_dict(checkpoint['optimizer_mapping'])
- if optimizer_discriminator is not None:
- optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
-
- return checkpoint['epoch']
-
- def generate(self, x, video_save_dir, pic_path, crop_info, enhancer=None, background_enhancer=None, preprocess='crop', img_size=256):
-
- source_image=x['source_image'].type(torch.FloatTensor)
- source_semantics=x['source_semantics'].type(torch.FloatTensor)
- target_semantics=x['target_semantics_list'].type(torch.FloatTensor)
- source_image=source_image.to(self.device)
- source_semantics=source_semantics.to(self.device)
- target_semantics=target_semantics.to(self.device)
- if 'yaw_c_seq' in x:
- yaw_c_seq = x['yaw_c_seq'].type(torch.FloatTensor)
- yaw_c_seq = x['yaw_c_seq'].to(self.device)
- else:
- yaw_c_seq = None
- if 'pitch_c_seq' in x:
- pitch_c_seq = x['pitch_c_seq'].type(torch.FloatTensor)
- pitch_c_seq = x['pitch_c_seq'].to(self.device)
- else:
- pitch_c_seq = None
- if 'roll_c_seq' in x:
- roll_c_seq = x['roll_c_seq'].type(torch.FloatTensor)
- roll_c_seq = x['roll_c_seq'].to(self.device)
- else:
- roll_c_seq = None
-
- frame_num = x['frame_num']
-
- predictions_video = make_animation(source_image, source_semantics, target_semantics,
- self.generator, self.kp_extractor, self.he_estimator, self.mapping,
- yaw_c_seq, pitch_c_seq, roll_c_seq, use_exp = True)
-
- predictions_video = predictions_video.reshape((-1,)+predictions_video.shape[2:])
- predictions_video = predictions_video[:frame_num]
-
- video = []
- for idx in range(predictions_video.shape[0]):
- image = predictions_video[idx]
- image = np.transpose(image.data.cpu().numpy(), [1, 2, 0]).astype(np.float32)
- video.append(image)
- result = img_as_ubyte(video)
-
- ### the generated video is 256x256, so we keep the aspect ratio,
- original_size = crop_info[0]
- if original_size:
- result = [ cv2.resize(result_i,(img_size, int(img_size * original_size[1]/original_size[0]) )) for result_i in result ]
-
- video_name = x['video_name'] + '.mp4'
- path = os.path.join(video_save_dir, 'temp_'+video_name)
-
- imageio.mimsave(path, result, fps=float(25))
-
- av_path = os.path.join(video_save_dir, video_name)
- return_path = av_path
-
- audio_path = x['audio_path']
- audio_name = os.path.splitext(os.path.split(audio_path)[-1])[0]
- new_audio_path = os.path.join(video_save_dir, audio_name+'.wav')
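-        # Trim the audio to the rendered frame count (25 fps, converted to
-        # milliseconds) and resample to 16 kHz before muxing it with the video.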
- start_time = 0
- # cog will not keep the .mp3 filename
- sound = AudioSegment.from_file(audio_path)
- frames = frame_num
- end_time = start_time + frames*1/25*1000
- word1=sound.set_frame_rate(16000)
- word = word1[start_time:end_time]
- word.export(new_audio_path, format="wav")
-
- save_video_with_watermark(path, new_audio_path, av_path, watermark= False)
- print(f'The generated video is named {video_save_dir}/{video_name}')
-
- if 'full' in preprocess.lower():
- # only add watermark to the full image.
- video_name_full = x['video_name'] + '_full.mp4'
- full_video_path = os.path.join(video_save_dir, video_name_full)
- return_path = full_video_path
- paste_pic(path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop= True if 'ext' in preprocess.lower() else False)
- print(f'The generated video is named {video_save_dir}/{video_name_full}')
- else:
- full_video_path = av_path
-
- #### paste back then enhancers
- if enhancer:
- video_name_enhancer = x['video_name'] + '_enhanced.mp4'
- enhanced_path = os.path.join(video_save_dir, 'temp_'+video_name_enhancer)
- av_path_enhancer = os.path.join(video_save_dir, video_name_enhancer)
- return_path = av_path_enhancer
-
- try:
- enhanced_images_gen_with_len = enhancer_generator_with_len(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
- imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
- except:
- enhanced_images_gen_with_len = enhancer_list(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
- imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
-
- save_video_with_watermark(enhanced_path, new_audio_path, av_path_enhancer, watermark= False)
- print(f'The generated video is named {video_save_dir}/{video_name_enhancer}')
- os.remove(enhanced_path)
-
- os.remove(path)
- os.remove(new_audio_path)
-
- return return_path
-
diff --git a/spaces/innocent-charles/Swahili-Question-Answer-App/README.md b/spaces/innocent-charles/Swahili-Question-Answer-App/README.md
deleted file mode 100644
index 52f43c9405153e145666ac5435890853dfe0f572..0000000000000000000000000000000000000000
--- a/spaces/innocent-charles/Swahili-Question-Answer-App/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Swahili Question Answer App
-emoji: 💻
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/inreVtussa/clothingai/Examples/Colin Mcrae Dirt 2 Keygen Serial.md b/spaces/inreVtussa/clothingai/Examples/Colin Mcrae Dirt 2 Keygen Serial.md
deleted file mode 100644
index 63b7cba56d771a9c76ec48aecca8e05cb2d43593..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Colin Mcrae Dirt 2 Keygen Serial.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-download my cheat engine,colinmcrae_dirt2_cheats,colinmcrae_dirt2_cheat,colinmcrae_dirt2_cheat_codes,colinmcrae_dirt2_serial, which is very easy to use, just type the title and press enter. it will provide all the cheats, codes and all what ever you want. i hope you enjoy it!
-Colin Mcrae Dirt 2 Keygen Serial Download File ✶ https://tiurll.com/2uCipd
-this is the 100% working colin mcrae dirt 2 cheat serial unlock all levels unlock all vehicles free all extras free all weapons complete cars complete all levels complete all weapons complete all extras complete all vehicles
-to get this cheat you have to download my cheat engine,colinmcrae_dirt2_cheats,colinmcrae_dirt2_cheat,colinmcrae_dirt2_cheat_codes,colinmcrae_dirt2_serial, which is very easy to use, just type the title and press enter. it will provide all the cheats, codes and all what ever you want. i hope you enjoy it!
-a good racing game needs a good game engine, and colin mcrae dirt 2 is no exception. this game is fast, thrilling and addictive. it’s a good companion for the new version of colin mcrae dirt 2: freedom, which is also available. now get started and enjoy colin mcrae dirt 2!
-
-colin mcrae dirt 2 offers a total of 3 different game modes, including single race, time trial and multiplayer. single race is a race to the finish, time trial is a race against the clock and multiplayer is a race against your friends.
-the game also has multiple game types, including arcade, drag racing, dirt trax and street. arcade game is a race to the finish and dirt trax game is a race against the clock. street game is similar to time trial.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/ismot/1702t1/loss/object_loss.py b/spaces/ismot/1702t1/loss/object_loss.py
deleted file mode 100644
index eda7c1c15feed4586e6262326ce06ece98f885ef..0000000000000000000000000000000000000000
--- a/spaces/ismot/1702t1/loss/object_loss.py
+++ /dev/null
@@ -1,42 +0,0 @@
-"""
-@Date: 2021/08/12
-@description:
-"""
-import torch
-import torch.nn as nn
-from loss.grad_loss import GradLoss
-
-
-class ObjectLoss(nn.Module):
- def __init__(self):
- super().__init__()
- self.heat_map_loss = HeatmapLoss(reduction='mean') # FocalLoss(reduction='mean')
- self.l1_loss = nn.SmoothL1Loss()
-
- def forward(self, gt, dt):
- # TODO::
- return 0
-
-
-class HeatmapLoss(nn.Module):
- def __init__(self, weight=None, alpha=2, beta=4, reduction='mean'):
- super(HeatmapLoss, self).__init__()
- self.alpha = alpha
- self.beta = beta
- self.reduction = reduction
-
- def forward(self, targets, inputs):
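-        # Penalty-reduced focal loss in the style of CornerNet/CenterNet:
-        # targets == 1 are positives; negatives are down-weighted by (1 - target)^beta.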
- center_id = (targets == 1.0).float()
- other_id = (targets != 1.0).float()
- center_loss = -center_id * (1.0 - inputs) ** self.alpha * torch.log(inputs + 1e-14)
- other_loss = -other_id * (1 - targets) ** self.beta * inputs ** self.alpha * torch.log(1.0 - inputs + 1e-14)
- loss = center_loss + other_loss
-
- batch_size = loss.size(0)
- if self.reduction == 'mean':
- loss = torch.sum(loss) / batch_size
-
- if self.reduction == 'sum':
- loss = torch.sum(loss) / batch_size
-
- return loss
diff --git a/spaces/jbilcke-hf/MusicGen/tests/modules/test_transformer.py b/spaces/jbilcke-hf/MusicGen/tests/modules/test_transformer.py
deleted file mode 100644
index ff7dfe4c2de05112aec55ddea9c8fd978668f80b..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/MusicGen/tests/modules/test_transformer.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-
-import pytest
-import torch
-
-from audiocraft.modules.transformer import (
- StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend)
-
-
-def test_transformer_causal_streaming():
- torch.manual_seed(1234)
-
- for context, custom in product([None, 10], [False, True]):
- # Test that causality and receptive fields are properly handled.
- # looking at the gradients
- tr = StreamingTransformer(
- 16, 4, 1 if context else 2,
- causal=True, past_context=context, custom=custom,
- dropout=0.)
- steps = 20
- for k in [0, 10, 15, 19]:
- x = torch.randn(4, steps, 16, requires_grad=True)
- y = tr(x)
- y[:, k].abs().sum().backward()
- if k + 1 < steps:
- assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm()
- assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm()
- if context is not None and k > context:
- limit = k - context - 1
- assert torch.allclose(x.grad[:, :limit],
- torch.tensor(0.)), x.grad[:, :limit].norm()
-
- # Now check that streaming gives the same result at batch eval.
- x = torch.randn(4, steps, 16)
- y = tr(x)
- ys = []
- with tr.streaming():
- for k in range(steps):
- chunk = x[:, k:k + 1, :]
- ys.append(tr(chunk))
- y_stream = torch.cat(ys, dim=1)
- delta = torch.norm(y_stream - y) / torch.norm(y)
- assert delta < 1e-6, delta
-
-
-def test_transformer_vs_pytorch():
- torch.manual_seed(1234)
- # Check that in the non causal setting, we get the same result as
- # PyTorch Transformer encoder.
- for custom in [False, True]:
- tr = StreamingTransformer(
- 16, 4, 2,
- causal=False, custom=custom, dropout=0., positional_scale=0.)
- layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True)
- tr_ref = torch.nn.TransformerEncoder(layer, 2)
- tr.load_state_dict(tr_ref.state_dict())
-
- x = torch.randn(4, 20, 16)
- y = tr(x)
- y2 = tr_ref(x)
- delta = torch.norm(y2 - y) / torch.norm(y)
- assert delta < 1e-6, delta
-
-
-def test_streaming_api():
- tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.)
- tr.eval()
- steps = 12
- x = torch.randn(1, steps, 16)
-
- with torch.no_grad():
- with tr.streaming():
- _ = tr(x[:, :1])
- state = {k: v.clone() for k, v in tr.get_streaming_state().items()}
- y = tr(x[:, 1:2])
- tr.set_streaming_state(state)
- y2 = tr(x[:, 1:2])
- assert torch.allclose(y, y2), (y - y2).norm()
- assert tr.flush() is None
-
-
-def test_memory_efficient():
- for backend in ['torch', 'xformers']:
- torch.manual_seed(1234)
- set_efficient_attention_backend(backend)
-
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1)
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1)
- tr_mem_efficient.load_state_dict(tr.state_dict())
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_mem_efficient(x)
- assert torch.allclose(y, y2), ((y - y2).norm(), backend)
-
-
-def test_attention_as_float32():
- torch.manual_seed(1234)
- cases = [
- {'custom': True},
- {'custom': False},
- ]
- for case in cases:
- tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case)
- tr_float32 = StreamingTransformer(
- 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case)
- if not case['custom']:
- # we are not using autocast here because it doesn't really
- # work as expected on CPU, so we have to manually cast the weights of the MHA.
- for layer in tr_float32.layers:
- layer.self_attn.mha.to(torch.float32)
- tr_float32.load_state_dict(tr.state_dict())
- steps = 12
- x = torch.randn(3, steps, 16, dtype=torch.bfloat16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_float32(x)
- assert not torch.allclose(y, y2), (y - y2).norm()
-
-
-@torch.no_grad()
-def test_streaming_memory_efficient():
- for backend in ['torch', 'xformers']:
- torch.manual_seed(1234)
- set_efficient_attention_backend(backend)
- tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True)
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, causal=True)
- tr.load_state_dict(tr_mem_efficient.state_dict())
- tr.eval()
- tr_mem_efficient.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- ref = tr(x)
-
- with tr_mem_efficient.streaming():
- outs = []
- # frame_sizes = [2] + [1] * (steps - 2)
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr_mem_efficient(frame))
-
- out = torch.cat(outs, dim=1)
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-def test_cross_attention():
- torch.manual_seed(1234)
- for norm_first in [True, False]:
- m = StreamingTransformer(
- 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True)
- m_cross = StreamingTransformer(
- 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True)
- m_cross.load_state_dict(m.state_dict(), strict=False)
- x = torch.randn(2, 5, 16)
- cross_x = torch.randn(2, 3, 16)
- y_ref = m(x)
- y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x)
- # With norm_first, the two should be exactly the same,
- # but with norm_first=False, we get 2 normalizations in a row
- # and the epsilon value leads to a tiny change.
- atol = 0. if norm_first else 1e-6
- print((y_ref - y_cross_zero).norm() / y_ref.norm())
- assert torch.allclose(y_ref, y_cross_zero, atol=atol)
-
- # We now expect a difference even with a generous atol of 1e-2.
- y_cross = m_cross(x, cross_attention_src=cross_x)
- assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2)
-
- with pytest.raises(AssertionError):
- _ = m_cross(x)
- _ = m(x, cross_attention_src=cross_x)
-
-
-def test_cross_attention_compat():
- torch.manual_seed(1234)
- num_heads = 2
- dim = num_heads * 64
- with pytest.raises(AssertionError):
- StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True)
-
- cross_attn = StreamingMultiheadAttention(
- dim, num_heads, dropout=0, cross_attention=True, custom=True)
- ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True)
-
- # We can load the regular attention state dict
- # so we have compat when loading old checkpoints.
- cross_attn.load_state_dict(ref_attn.state_dict())
-
- queries = torch.randn(3, 7, dim)
- keys = torch.randn(3, 9, dim)
- values = torch.randn(3, 9, dim)
-
- y = cross_attn(queries, keys, values)[0]
- y_ref = ref_attn(queries, keys, values)[0]
- assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm()
-
- # Now let's check that streaming is working properly.
- with cross_attn.streaming():
- ys = []
- for step in range(queries.shape[1]):
- ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0])
- y_streaming = torch.cat(ys, dim=1)
- assert torch.allclose(y_streaming, y, atol=1e-7)
-
-
-def test_repeat_kv():
- torch.manual_seed(1234)
- num_heads = 8
- kv_repeat = 4
- dim = num_heads * 64
- with pytest.raises(AssertionError):
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True)
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat)
- mha = StreamingMultiheadAttention(
- dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True)
- x = torch.randn(4, 18, dim)
- y = mha(x, x, x)[0]
- assert x.shape == y.shape
-
-
-def test_qk_layer_norm():
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False)
- steps = 12
- x = torch.randn(3, steps, 16)
- y = tr(x)
-
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True)
- z = torch.randn(3, 21, 16)
- y = tr(x, cross_attention_src=z)
- assert y.shape == x.shape
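
The streaming tests above feed the model one time step at a time inside a `streaming()` context and compare the result with a single full forward pass. A minimal sketch of that usage pattern, assuming the audiocraft package these tests belong to is installed and that StreamingTransformer is importable from audiocraft.modules.transformer (the import path is an assumption; the tests only show the class name):

# Sketch: chunked streaming inference matching the full forward pass.
import torch
from audiocraft.modules.transformer import StreamingTransformer  # assumed import path

model = StreamingTransformer(16, 4, 2, causal=True, dropout=0.)
model.eval()

x = torch.randn(1, 8, 16)  # (batch, time, dim)
with torch.no_grad():
    full = model(x)                      # one pass over the whole sequence
    chunks = []
    with model.streaming():              # keeps the key/value cache between calls
        for t in range(x.shape[1]):
            chunks.append(model(x[:, t:t + 1]))
    streamed = torch.cat(chunks, dim=1)

delta = torch.norm(full - streamed) / torch.norm(full)
print(float(delta))  # the tests above require this relative error to stay below 1e-6
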
diff --git a/spaces/jbilcke-hf/ai-comic-factory/CONTRIBUTORS.md b/spaces/jbilcke-hf/ai-comic-factory/CONTRIBUTORS.md
deleted file mode 100644
index 421a1fb031dc63e7e10e590abed81e0ecbf1c21b..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/ai-comic-factory/CONTRIBUTORS.md
+++ /dev/null
@@ -1,10 +0,0 @@
-This project was developed by Julian Bilcke (@jbilcke-hf), as part of his work at Hugging Face.
-
-------------------------------------------
-
-A huge thanks to external developers for their contributions!
-
-艾逗笔 (@idoubi):
-- [feature] Added support for OpenAI: https://github.com/jbilcke-hf/ai-comic-factory/pull/6
-- [bug] predict import error (use dynamic imports for the LLM provider): https://github.com/jbilcke-hf/ai-comic-factory/pull/9
-
diff --git a/spaces/jeang/ernie_demo_toy/ernie/__init__.py b/spaces/jeang/ernie_demo_toy/ernie/__init__.py
deleted file mode 100644
index 54db11e0be9cfbc121a12e32abd2a0483410a483..0000000000000000000000000000000000000000
--- a/spaces/jeang/ernie_demo_toy/ernie/__init__.py
+++ /dev/null
@@ -1,47 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-from .ernie import * # noqa: F401, F403
-from tensorflow.python.client import device_lib
-import logging
-
-__version__ = '1.0.1'
-
-logging.getLogger().setLevel(logging.WARNING)
-logging.getLogger("transformers.tokenization_utils").setLevel(logging.ERROR)
-logging.basicConfig(
- format='%(asctime)-15s [%(levelname)s] %(message)s',
- datefmt='%Y-%m-%d %H:%M:%S'
-)
-
-
-def _get_cpu_name():
- import cpuinfo
- cpu_info = cpuinfo.get_cpu_info()
- cpu_name = f"{cpu_info['brand_raw']}, {cpu_info['count']} vCores"
- return cpu_name
-
-
-def _get_gpu_name():
- gpu_name = \
- device_lib\
- .list_local_devices()[3]\
- .physical_device_desc\
- .split(',')[1]\
- .split('name:')[1]\
- .strip()
- return gpu_name
-
-
-device_name = _get_cpu_name()
-device_type = 'CPU'
-
-try:
- device_name = _get_gpu_name()
- device_type = 'GPU'
-except IndexError:
- # Detect TPU
- pass
-
-logging.info(f'ernie v{__version__}')
-logging.info(f'target device: [{device_type}] {device_name}\n')
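
The module above resolves a human-readable device name at import time: py-cpuinfo for the CPU and TensorFlow's device_lib for a GPU, falling back to the CPU name when no GPU entry is found. A tiny sketch of the CPU branch, assuming py-cpuinfo is installed:

# Sketch: the CPU-name lookup used by _get_cpu_name above.
import cpuinfo

info = cpuinfo.get_cpu_info()
print(f"{info['brand_raw']}, {info['count']} vCores")  # e.g. "Intel(R) Xeon(R) CPU @ 2.20GHz, 8 vCores"
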
diff --git a/spaces/jeevankumar-s/stabilityai-stable-diffusion-xl-base-1.0/README.md b/spaces/jeevankumar-s/stabilityai-stable-diffusion-xl-base-1.0/README.md
deleted file mode 100644
index 5cfc8a6d097bc6d1456a37e79cfe6d8b031cf97a..0000000000000000000000000000000000000000
--- a/spaces/jeevankumar-s/stabilityai-stable-diffusion-xl-base-1.0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stabilityai Stable Diffusion Xl Base 1.0
-emoji: 🏆
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/streams/text.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/streams/text.py
deleted file mode 100644
index bba2d3f7dfffa3bdbf921bdad4ca7143be97c2fd..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/streams/text.py
+++ /dev/null
@@ -1,143 +0,0 @@
-from __future__ import annotations
-
-import codecs
-from dataclasses import InitVar, dataclass, field
-from typing import Any, Callable, Mapping
-
-from ..abc import (
- AnyByteReceiveStream,
- AnyByteSendStream,
- AnyByteStream,
- ObjectReceiveStream,
- ObjectSendStream,
- ObjectStream,
-)
-
-
-@dataclass(eq=False)
-class TextReceiveStream(ObjectReceiveStream[str]):
- """
- Stream wrapper that decodes bytes to strings using the given encoding.
-
- Decoding is done using :class:`~codecs.IncrementalDecoder` which returns any completely
- received unicode characters as soon as they come in.
-
- :param transport_stream: any bytes-based receive stream
- :param encoding: character encoding to use for decoding bytes to strings (defaults to
- ``utf-8``)
- :param errors: handling scheme for decoding errors (defaults to ``strict``; see the
- `codecs module documentation`_ for a comprehensive list of options)
-
- .. _codecs module documentation: https://docs.python.org/3/library/codecs.html#codec-objects
- """
-
- transport_stream: AnyByteReceiveStream
- encoding: InitVar[str] = "utf-8"
- errors: InitVar[str] = "strict"
- _decoder: codecs.IncrementalDecoder = field(init=False)
-
- def __post_init__(self, encoding: str, errors: str) -> None:
- decoder_class = codecs.getincrementaldecoder(encoding)
- self._decoder = decoder_class(errors=errors)
-
- async def receive(self) -> str:
- while True:
- chunk = await self.transport_stream.receive()
- decoded = self._decoder.decode(chunk)
- if decoded:
- return decoded
-
- async def aclose(self) -> None:
- await self.transport_stream.aclose()
- self._decoder.reset()
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- return self.transport_stream.extra_attributes
-
-
-@dataclass(eq=False)
-class TextSendStream(ObjectSendStream[str]):
- """
- Sends strings to the wrapped stream as bytes using the given encoding.
-
- :param AnyByteSendStream transport_stream: any bytes-based send stream
- :param str encoding: character encoding to use for encoding strings to bytes (defaults to
- ``utf-8``)
- :param str errors: handling scheme for encoding errors (defaults to ``strict``; see the
- `codecs module documentation`_ for a comprehensive list of options)
-
- .. _codecs module documentation: https://docs.python.org/3/library/codecs.html#codec-objects
- """
-
- transport_stream: AnyByteSendStream
- encoding: InitVar[str] = "utf-8"
- errors: str = "strict"
- _encoder: Callable[..., tuple[bytes, int]] = field(init=False)
-
- def __post_init__(self, encoding: str) -> None:
- self._encoder = codecs.getencoder(encoding)
-
- async def send(self, item: str) -> None:
- encoded = self._encoder(item, self.errors)[0]
- await self.transport_stream.send(encoded)
-
- async def aclose(self) -> None:
- await self.transport_stream.aclose()
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- return self.transport_stream.extra_attributes
-
-
-@dataclass(eq=False)
-class TextStream(ObjectStream[str]):
- """
- A bidirectional stream that decodes bytes to strings on receive and encodes strings to bytes on
- send.
-
- Extra attributes will be provided from both streams, with the receive stream providing the
- values in case of a conflict.
-
- :param AnyByteStream transport_stream: any bytes-based stream
- :param str encoding: character encoding to use for encoding/decoding strings to/from bytes
- (defaults to ``utf-8``)
- :param str errors: handling scheme for encoding errors (defaults to ``strict``; see the
- `codecs module documentation`_ for a comprehensive list of options)
-
- .. _codecs module documentation: https://docs.python.org/3/library/codecs.html#codec-objects
- """
-
- transport_stream: AnyByteStream
- encoding: InitVar[str] = "utf-8"
- errors: InitVar[str] = "strict"
- _receive_stream: TextReceiveStream = field(init=False)
- _send_stream: TextSendStream = field(init=False)
-
- def __post_init__(self, encoding: str, errors: str) -> None:
- self._receive_stream = TextReceiveStream(
- self.transport_stream, encoding=encoding, errors=errors
- )
- self._send_stream = TextSendStream(
- self.transport_stream, encoding=encoding, errors=errors
- )
-
- async def receive(self) -> str:
- return await self._receive_stream.receive()
-
- async def send(self, item: str) -> None:
- await self._send_stream.send(item)
-
- async def send_eof(self) -> None:
- await self.transport_stream.send_eof()
-
- async def aclose(self) -> None:
- await self._send_stream.aclose()
- await self._receive_stream.aclose()
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- return {
- **self._send_stream.extra_attributes,
- **self._receive_stream.extra_attributes,
- }
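
The docstrings above note that TextReceiveStream only returns completely decoded characters, buffering partial multi-byte sequences in an incremental decoder. A small sketch of that behaviour, using an anyio memory object stream of bytes as the transport (any byte-oriented receive stream works):

# Sketch: incremental UTF-8 decoding with TextReceiveStream.
import anyio
from anyio.streams.text import TextReceiveStream

async def main() -> None:
    send, receive = anyio.create_memory_object_stream(10)
    text_stream = TextReceiveStream(receive)

    data = "héllo".encode("utf-8")
    await send.send(data[:2])   # ends in the middle of the two-byte 'é'
    await send.send(data[2:])   # completes it
    await send.aclose()

    print(await text_stream.receive())  # "h"  (only complete characters are emitted)
    print(await text_stream.receive())  # "éllo"

anyio.run(main)
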
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/NSAP_PTR.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/NSAP_PTR.py
deleted file mode 100644
index 0a18fdceb4ce34d30ba55113d6017ba375bc93c7..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/NSAP_PTR.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
-
-# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
-#
-# Permission to use, copy, modify, and distribute this software and its
-# documentation for any purpose with or without fee is hereby granted,
-# provided that the above copyright notice and this permission notice
-# appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
-# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
-# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
-# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
-# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
-# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
-# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-
-import dns.immutable
-import dns.rdtypes.nsbase
-
-
-@dns.immutable.immutable
-class NSAP_PTR(dns.rdtypes.nsbase.UncompressedNS):
-
- """NSAP-PTR record"""
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/pens/t2CharStringPen.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/pens/t2CharStringPen.py
deleted file mode 100644
index 41ab0f92f2b683ac2dc87ca1b16f54047d0fef81..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/pens/t2CharStringPen.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# Copyright (c) 2009 Type Supply LLC
-# Author: Tal Leming
-
-from fontTools.misc.roundTools import otRound, roundFunc
-from fontTools.misc.psCharStrings import T2CharString
-from fontTools.pens.basePen import BasePen
-from fontTools.cffLib.specializer import specializeCommands, commandsToProgram
-
-
-class T2CharStringPen(BasePen):
- """Pen to draw Type 2 CharStrings.
-
- The 'roundTolerance' argument controls the rounding of point coordinates.
- It is defined as the maximum absolute difference between the original
- float and the rounded integer value.
- The default tolerance of 0.5 means that all floats are rounded to integer;
- a value of 0 disables rounding; values in between will only round floats
- which are close to their integral part within the tolerated range.
- """
-
- def __init__(self, width, glyphSet, roundTolerance=0.5, CFF2=False):
- super(T2CharStringPen, self).__init__(glyphSet)
- self.round = roundFunc(roundTolerance)
- self._CFF2 = CFF2
- self._width = width
- self._commands = []
- self._p0 = (0, 0)
-
- def _p(self, pt):
- p0 = self._p0
- pt = self._p0 = (self.round(pt[0]), self.round(pt[1]))
- return [pt[0] - p0[0], pt[1] - p0[1]]
-
- def _moveTo(self, pt):
- self._commands.append(("rmoveto", self._p(pt)))
-
- def _lineTo(self, pt):
- self._commands.append(("rlineto", self._p(pt)))
-
- def _curveToOne(self, pt1, pt2, pt3):
- _p = self._p
- self._commands.append(("rrcurveto", _p(pt1) + _p(pt2) + _p(pt3)))
-
- def _closePath(self):
- pass
-
- def _endPath(self):
- pass
-
- def getCharString(self, private=None, globalSubrs=None, optimize=True):
- commands = self._commands
- if optimize:
- maxstack = 48 if not self._CFF2 else 513
- commands = specializeCommands(
- commands, generalizeFirst=False, maxstack=maxstack
- )
- program = commandsToProgram(commands)
- if self._width is not None:
- assert (
- not self._CFF2
- ), "CFF2 does not allow encoding glyph width in CharString."
- program.insert(0, otRound(self._width))
- if not self._CFF2:
- program.append("endchar")
- charString = T2CharString(
- program=program, private=private, globalSubrs=globalSubrs
- )
- return charString
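
The pen above converts absolute pen coordinates into relative Type 2 charstring operators, optionally specializing the program and prepending the advance width. A short usage sketch with a hand-drawn rectangle (no real glyph set needed), assuming fontTools is installed:

# Sketch: drawing a simple closed contour into a Type 2 CharString.
from fontTools.pens.t2CharStringPen import T2CharStringPen

pen = T2CharStringPen(width=600, glyphSet=None)  # glyphSet is only needed for components
pen.moveTo((100, 0))
pen.lineTo((100, 700))
pen.lineTo((500, 700))
pen.lineTo((500, 0))
pen.closePath()

charstring = pen.getCharString()
print(charstring.program)  # width first, then relative move/line operators, ending with 'endchar'
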
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/svgLib/path/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/svgLib/path/__init__.py
deleted file mode 100644
index 742bc64ce037a53a765efc80ed773b840af5b4c7..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/svgLib/path/__init__.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from fontTools.pens.transformPen import TransformPen
-from fontTools.misc import etree
-from fontTools.misc.textTools import tostr
-from .parser import parse_path
-from .shapes import PathBuilder
-
-
-__all__ = [tostr(s) for s in ("SVGPath", "parse_path")]
-
-
-class SVGPath(object):
- """Parse SVG ``path`` elements from a file or string, and draw them
- onto a glyph object that supports the FontTools Pen protocol.
-
- For example, reading from an SVG file and drawing to a Defcon Glyph:
-
- import defcon
- glyph = defcon.Glyph()
- pen = glyph.getPen()
- svg = SVGPath("path/to/a.svg")
- svg.draw(pen)
-
- Or reading from a string containing SVG data, using the alternative
- 'fromstring' (a class method):
-
- data = '>> tree = client.get_tree("owner", "repo", "branch")
- >>> tree.sha
- """
-
- @dataclass
- class GitTreeObject(DataClassJsonMixin):
- """
- Dataclass for the objects in the tree.
-
- Attributes:
- - path (str): Path to the object.
- - mode (str): Mode of the object.
- - type (str): Type of the object.
- - sha (str): SHA1 checksum ID of the object.
- - url (str): URL for the object.
- - size (Optional[int]): Size of the object (only for blobs).
- """
-
- path: str
- mode: str
- type: str
- sha: str
- url: str
- size: Optional[int] = None
-
- sha: str
- url: str
- tree: List[GitTreeObject]
- truncated: bool
-
-
-@dataclass
-class GitBlobResponseModel(DataClassJsonMixin):
- """
- Dataclass for the response from the Github API's getBlob endpoint.
-
- Attributes:
- - content (str): Content of the blob.
- - encoding (str): Encoding of the blob.
- - url (str): URL for the blob.
- - sha (str): SHA1 checksum ID of the blob.
- - size (int): Size of the blob.
- - node_id (str): Node ID of the blob.
- """
-
- content: str
- encoding: str
- url: str
- sha: str
- size: int
- node_id: str
-
-
-@dataclass
-class GitCommitResponseModel(DataClassJsonMixin):
- """
- Dataclass for the response from the Github API's getCommit endpoint.
-
- Attributes:
- - tree (Tree): Tree object for the commit.
- """
-
- @dataclass
- class Commit(DataClassJsonMixin):
- """Dataclass for the commit object in the commit. (commit.commit)."""
-
- @dataclass
- class Tree(DataClassJsonMixin):
- """
- Dataclass for the tree object in the commit.
-
- Attributes:
- - sha (str): SHA for the commit
- """
-
- sha: str
-
- tree: Tree
-
- commit: Commit
-
-
-@dataclass
-class GitBranchResponseModel(DataClassJsonMixin):
- """
- Dataclass for the response from the Github API's getBranch endpoint.
-
- Attributes:
- - commit (Commit): Commit object for the branch.
- """
-
- @dataclass
- class Commit(DataClassJsonMixin):
- """Dataclass for the commit object in the branch. (commit.commit)."""
-
- @dataclass
- class Commit(DataClassJsonMixin):
- """Dataclass for the commit object in the commit. (commit.commit.tree)."""
-
- @dataclass
- class Tree(DataClassJsonMixin):
- """
- Dataclass for the tree object in the commit.
-
- Usage: commit.commit.tree.sha
- """
-
- sha: str
-
- tree: Tree
-
- commit: Commit
-
- commit: Commit
-
-
-class GithubClient:
- """
- An asynchronous client for interacting with the Github API.
-
- This client is used for making API requests to Github.
- It provides methods for accessing the Github API endpoints.
- The client requires a Github token for authentication,
- which can be passed as an argument or set as an environment variable.
- If no Github token is provided, the client will raise a ValueError.
-
- Examples:
- >>> client = GithubClient("my_github_token")
- >>> branch_info = client.get_branch("owner", "repo", "branch")
- """
-
- DEFAULT_BASE_URL = "https://api.github.com"
- DEFAULT_API_VERSION = "2022-11-28"
-
- def __init__(
- self,
- github_token: Optional[str] = None,
- base_url: str = DEFAULT_BASE_URL,
- api_version: str = DEFAULT_API_VERSION,
- verbose: bool = False,
- ) -> None:
- """
- Initialize the GithubClient.
-
- Args:
- - github_token (str): Github token for authentication.
- If not provided, the client will try to get it from
- the GITHUB_TOKEN environment variable.
- - base_url (str): Base URL for the Github API
- (defaults to "https://api.github.com").
- - api_version (str): Github API version (defaults to "2022-11-28").
-
- Raises:
- ValueError: If no Github token is provided.
- """
- if github_token is None:
- github_token = os.getenv("GITHUB_TOKEN")
- if github_token is None:
- raise ValueError(
- "Please provide a Github token. "
- + "You can do so by passing it as an argument to the GithubReader,"
- + "or by setting the GITHUB_TOKEN environment variable."
- )
-
- self._base_url = base_url
- self._api_version = api_version
- self._verbose = verbose
-
- self._endpoints = {
- "getTree": "/repos/{owner}/{repo}/git/trees/{tree_sha}",
- "getBranch": "/repos/{owner}/{repo}/branches/{branch}",
- "getBlob": "/repos/{owner}/{repo}/git/blobs/{file_sha}",
- "getCommit": "/repos/{owner}/{repo}/commits/{commit_sha}",
- }
-
- self._headers = {
- "Accept": "application/vnd.github+json",
- "Authorization": f"Bearer {github_token}",
- "X-GitHub-Api-Version": f"{self._api_version}",
- }
-
- def get_all_endpoints(self) -> Dict[str, str]:
- """Get all available endpoints."""
- return {**self._endpoints}
-
- async def request(
- self,
- endpoint: str,
- method: str,
- headers: Dict[str, Any] = {},
- **kwargs: Any,
- ) -> Any:
- """
- Make an API request to the Github API.
-
- This method is used for making API requests to the Github API.
- It is used internally by the other methods in the client.
-
- Args:
- - `endpoint (str)`: Name of the endpoint to make the request to.
- - `method (str)`: HTTP method to use for the request.
- - `headers (dict)`: HTTP headers to include in the request.
- - `**kwargs`: Keyword arguments to pass to the endpoint URL.
-
- Returns:
- - `response (httpx.Response)`: Response from the API request.
-
- Raises:
- - ImportError: If the `httpx` library is not installed.
- - httpx.HTTPError: If the API request fails.
-
- Examples:
- >>> response = client.request("getTree", "GET",
- owner="owner", repo="repo",
- tree_sha="tree_sha")
- """
- try:
- import httpx
- except ImportError:
- raise ImportError(
- "Please install httpx to use the GithubRepositoryReader. "
- "You can do so by running `pip install httpx`."
- )
-
- _headers = {**self._headers, **headers}
-
- _client: httpx.AsyncClient
- async with httpx.AsyncClient(
- headers=_headers, base_url=self._base_url
- ) as _client:
- try:
- response = await _client.request(
- method, url=self._endpoints[endpoint].format(**kwargs)
- )
- except httpx.HTTPError as excp:
- print(f"HTTP Exception for {excp.request.url} - {excp}")
- raise excp
- return response
-
- async def get_branch(
- self, owner: str, repo: str, branch: str
- ) -> GitBranchResponseModel:
- """
- Get information about a branch. (Github API endpoint: getBranch).
-
- Args:
- - `owner (str)`: Owner of the repository.
- - `repo (str)`: Name of the repository.
- - `branch (str)`: Name of the branch.
-
- Returns:
- - `branch_info (GitBranchResponseModel)`: Information about the branch.
-
- Examples:
- >>> branch_info = client.get_branch("owner", "repo", "branch")
- """
- return GitBranchResponseModel.from_json(
- (
- await self.request(
- "getBranch", "GET", owner=owner, repo=repo, branch=branch
- )
- ).text
- )
-
- async def get_tree(
- self, owner: str, repo: str, tree_sha: str
- ) -> GitTreeResponseModel:
- """
- Get information about a tree. (Github API endpoint: getTree).
-
- Args:
- - `owner (str)`: Owner of the repository.
- - `repo (str)`: Name of the repository.
- - `tree_sha (str)`: SHA of the tree.
-
- Returns:
- - `tree_info (GitTreeResponseModel)`: Information about the tree.
-
- Examples:
- >>> tree_info = client.get_tree("owner", "repo", "tree_sha")
- """
- return GitTreeResponseModel.from_json(
- (
- await self.request(
- "getTree", "GET", owner=owner, repo=repo, tree_sha=tree_sha
- )
- ).text
- )
-
- async def get_blob(
- self, owner: str, repo: str, file_sha: str
- ) -> GitBlobResponseModel:
- """
- Get information about a blob. (Github API endpoint: getBlob).
-
- Args:
- - `owner (str)`: Owner of the repository.
- - `repo (str)`: Name of the repository.
- - `file_sha (str)`: SHA of the file.
-
- Returns:
- - `blob_info (GitBlobResponseModel)`: Information about the blob.
-
- Examples:
- >>> blob_info = client.get_blob("owner", "repo", "file_sha")
- """
- return GitBlobResponseModel.from_json(
- (
- await self.request(
- "getBlob", "GET", owner=owner, repo=repo, file_sha=file_sha
- )
- ).text
- )
-
- async def get_commit(
- self, owner: str, repo: str, commit_sha: str
- ) -> GitCommitResponseModel:
- """
- Get information about a commit. (Github API endpoint: getCommit).
-
- Args:
- - `owner (str)`: Owner of the repository.
- - `repo (str)`: Name of the repository.
- - `commit_sha (str)`: SHA of the commit.
-
- Returns:
- - `commit_info (GitCommitResponseModel)`: Information about the commit.
-
- Examples:
- >>> commit_info = client.get_commit("owner", "repo", "commit_sha")
- """
- return GitCommitResponseModel.from_json(
- (
- await self.request(
- "getCommit", "GET", owner=owner, repo=repo, commit_sha=commit_sha
- )
- ).text
- )
-
-
-if __name__ == "__main__":
- import asyncio
-
- async def main() -> None:
- """Test the GithubClient."""
- client = GithubClient()
- response = await client.get_tree(
- owner="ahmetkca", repo="CommitAI", tree_sha="with-body"
- )
-
- for obj in response.tree:
- if obj.type == "blob":
- print(obj.path)
- print(obj.sha)
- blob_response = await client.get_blob(
- owner="ahmetkca", repo="CommitAI", file_sha=obj.sha
- )
- print(blob_response.content)
-
- asyncio.run(main())
diff --git a/spaces/joeli88/astrologer/README.md b/spaces/joeli88/astrologer/README.md
deleted file mode 100644
index aeee9693cca145ae556b0230dfafd629ad210013..0000000000000000000000000000000000000000
--- a/spaces/joeli88/astrologer/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Astrologer
-emoji: 🐢
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/julien-c/push-model-from-web/dist/app.js b/spaces/julien-c/push-model-from-web/dist/app.js
deleted file mode 100644
index e4728283306e23fac5bee60cf72b8a5f3467d1cc..0000000000000000000000000000000000000000
--- a/spaces/julien-c/push-model-from-web/dist/app.js
+++ /dev/null
@@ -1,83 +0,0 @@
-import { createRepo, commit } from "@huggingface/hub";
-const c = console;
-const ENDPOINT = "https://huggingface.co";
-async function whoami(token) {
- const path = `${ENDPOINT}/api/whoami-v2`;
- const res = await fetch(path, {
- headers: {
- Authorization: `Bearer ${token}`,
- }
- });
- return await res.json();
-}
-const FILES_TO_UPLOAD = [
- "./mobilenet/model.json",
- "./mobilenet/group1-shard1of2",
- "./mobilenet/group1-shard2of2",
- "./mobilenet/coffee.jpg",
- "./mobilenet/README.md",
-];
-function filenameFromURL(url) {
- return url.substring(url.lastIndexOf("/") + 1);
-}
-window.addEventListener("load", function () {
- const tokenEl = document.querySelector("#token");
- const repoNameEl = document.querySelector("#repo_name");
- const button = document.querySelector("#submit");
- const output = document.querySelector("#logs");
- const storedToken = window.localStorage.getItem("hf_token");
- if (storedToken) {
- tokenEl.value = storedToken;
- /// ^to help in dev.
- }
- repoNameEl.value = `tfjs-mobilenet-${Date.now() % 1_000}`;
- /// "random" repo name
- button.addEventListener("click", async function () {
- const token = tokenEl.value;
- const repoName = repoNameEl.value;
- if (!token || !repoName) {
- alert("You need a token and a repo name");
- return;
- }
- button.setAttribute("disabled", "disabled");
- try {
- const { name: username } = await whoami(token);
- const name = `${username}/${repoName}`;
- await createRepo({
- repo: {
- type: "model",
- name,
- },
- credentials: {
- accessToken: token,
- }
- });
- const operations = await Promise.all(FILES_TO_UPLOAD.map(async (file) => {
- return {
- operation: "addOrUpdate",
- path: filenameFromURL(file),
- content: await (await fetch(file)).blob(),
- };
- }));
- const commitOutput = await commit({
- repo: {
- type: "model",
- name,
- },
- credentials: {
- accessToken: token,
- },
- title: "upload model",
- operations,
- });
- c.log(commitOutput);
- const fullUrl = `${ENDPOINT}/${name}`;
- /// ^TODO(get it from the createRepo call)
- button.insertAdjacentHTML("afterend", `🎉 Upload complete! Model page is
${fullUrl} `);
- }
- catch (err) {
- output.append("\n" + err);
- }
- button.removeAttribute("disabled");
- });
-});
diff --git a/spaces/kenton-li/ChatArxiv/src/optimizeOpenAI.py b/spaces/kenton-li/ChatArxiv/src/optimizeOpenAI.py
deleted file mode 100644
index 9035d94efbfb4b3a8a9e3a3e57079df06d3aab88..0000000000000000000000000000000000000000
--- a/spaces/kenton-li/ChatArxiv/src/optimizeOpenAI.py
+++ /dev/null
@@ -1,233 +0,0 @@
-"""
-A simple wrapper for the official ChatGPT API
-"""
-import json
-import os
-import threading
-import time
-import requests
-import tiktoken
-from typing import Generator
-from queue import PriorityQueue as PQ
-
-class chatPaper:
- """
- A wrapper around the OpenAI chat completions API with API-key rotation and conversation management
- """
- def __init__(
- self,
- api_keys: list,
- proxy = None,
- api_proxy = None,
- max_tokens: int = 4000,
- temperature: float = 0.5,
- top_p: float = 1.0,
- model_name: str = "gpt-3.5-turbo",
- reply_count: int = 1,
- system_prompt = "You are ChatArxiv, A paper reading bot",
- lastAPICallTime = time.time()-100,
- apiTimeInterval = 20,
- ) -> None:
- self.model_name = model_name
- self.system_prompt = system_prompt
- self.apiTimeInterval = apiTimeInterval
- self.session = requests.Session()
- self.api_keys = PQ()
- for key in api_keys:
- self.api_keys.put((lastAPICallTime,key))
- self.proxy = proxy
- if self.proxy:
- proxies = {
- "http": self.proxy,
- "https": self.proxy,
- }
- self.session.proxies = proxies
- self.max_tokens = max_tokens
- self.temperature = temperature
- self.top_p = top_p
- self.reply_count = reply_count
- self.decrease_step = 250
- self.conversation = {}
- self.ENCODER = tiktoken.get_encoding("gpt2")
- if self.token_str(self.system_prompt) > self.max_tokens:
- raise Exception("System prompt is too long")
- self.lock = threading.Lock()
-
- def get_api_key(self):
- with self.lock:
- apiKey = self.api_keys.get()
- delay = self._calculate_delay(apiKey)
- time.sleep(delay)
- self.api_keys.put((time.time(), apiKey[1]))
- return apiKey[1]
-
- def _calculate_delay(self, apiKey):
- elapsed_time = time.time() - apiKey[0]
- if elapsed_time < self.apiTimeInterval:
- return self.apiTimeInterval - elapsed_time
- else:
- return 0
-
- def add_to_conversation(self, message: str, role: str, convo_id: str = "default"):
- if(convo_id not in self.conversation):
- self.reset(convo_id)
- self.conversation[convo_id].append({"role": role, "content": message})
-
- def __truncate_conversation(self, convo_id: str = "default"):
- """
- Truncate the conversation
- """
- last_dialog = self.conversation[convo_id][-1]
- query = str(last_dialog['content'])
- if(len(self.ENCODER.encode(str(query)))>self.max_tokens):
- query = query[:int(1.5*self.max_tokens)]
- while(len(self.ENCODER.encode(str(query)))>self.max_tokens):
- query = query[:self.decrease_step]
- self.conversation[convo_id] = self.conversation[convo_id][:-1]
- full_conversation = "\n".join([str(x["content"]) for x in self.conversation[convo_id]],)
- if len(self.ENCODER.encode(full_conversation)) > self.max_tokens:
- self.conversation_summary(convo_id=convo_id)
- full_conversation = ""
- for x in self.conversation[convo_id]:
- full_conversation = str(x["content"]) + "\n" + full_conversation
- while True:
- if (len(self.ENCODER.encode(full_conversation+query)) > self.max_tokens):
- query = query[:self.decrease_step]
- else:
- break
- last_dialog['content'] = str(query)
- self.conversation[convo_id].append(last_dialog)
-
- def ask_stream(
- self,
- prompt: str,
- role: str = "user",
- convo_id: str = "default",
- **kwargs,
- ) -> Generator:
- if convo_id not in self.conversation.keys():
- self.reset(convo_id=convo_id)
- self.add_to_conversation(prompt, "user", convo_id=convo_id)
- self.__truncate_conversation(convo_id=convo_id)
- apiKey = self.get_api_key()
- response = self.session.post(
- "https://api.openai.com/v1/chat/completions",
- headers={"Authorization": f"Bearer {kwargs.get('api_key', apiKey)}"},
- json={
- "model": self.model_name,
- "messages": self.conversation[convo_id],
- "stream": True,
- # kwargs
- "temperature": kwargs.get("temperature", self.temperature),
- "top_p": kwargs.get("top_p", self.top_p),
- "n": kwargs.get("n", self.reply_count),
- "user": role,
- },
- stream=True,
- )
- if response.status_code != 200:
- raise Exception(
- f"Error: {response.status_code} {response.reason} {response.text}",
- )
- for line in response.iter_lines():
- if not line:
- continue
- # Remove "data: "
- line = line.decode("utf-8")[6:]
- if line == "[DONE]":
- break
- resp: dict = json.loads(line)
- choices = resp.get("choices")
- if not choices:
- continue
- delta = choices[0].get("delta")
- if not delta:
- continue
- if "content" in delta:
- content = delta["content"]
- yield content
-
- def ask(self, prompt: str, role: str = "user", convo_id: str = "default", **kwargs):
- """
- Non-streaming ask
- """
- response = self.ask_stream(
- prompt=prompt,
- role=role,
- convo_id=convo_id,
- **kwargs,
- )
- full_response: str = "".join(response)
- self.add_to_conversation(full_response, role, convo_id=convo_id)
- usage_token = self.token_str(prompt)
- com_token = self.token_str(full_response)
- total_token = self.token_cost(convo_id=convo_id)
- return full_response, usage_token, com_token, total_token
-
- def check_api_available(self):
- response = self.session.post(
- "https://api.openai.com/v1/chat/completions",
- headers={"Authorization": f"Bearer {self.get_api_key()}"},
- json={
- "model": self.engine,
- "messages": [{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "print A"}],
- "stream": True,
- # kwargs
- "temperature": self.temperature,
- "top_p": self.top_p,
- "n": self.reply_count,
- "user": "user",
- },
- stream=True,
- )
- if response.status_code == 200:
- return True
- else:
- return False
-
- def reset(self, convo_id: str = "default", system_prompt = None):
- """
- Reset the conversation
- """
- self.conversation[convo_id] = [
- {"role": "system", "content": str(system_prompt or self.system_prompt)},
- ]
-
- def conversation_summary(self, convo_id: str = "default"):
- input = ""
- role = ""
- for conv in self.conversation[convo_id]:
- if (conv["role"]=='user'):
- role = 'User'
- else:
- role = 'ChatGpt'
- input+=role+' : '+conv['content']+'\n'
- prompt = "Your goal is to summarize the provided conversation. Your summary should be concise and focus on the key information to facilitate better dialogue for the large language model.Ensure that you include all necessary details and relevant information while still reducing the length of the conversation as much as possible. Your summary should be clear and easily understandable for the ChatGpt model providing a comprehensive and concise summary of the conversation."
- if(self.token_str(str(input)+prompt)>self.max_tokens):
- input = input[self.token_str(str(input))-self.max_tokens:]
- while self.token_str(str(input)+prompt)>self.max_tokens:
- input = input[self.decrease_step:]
- prompt = prompt.replace("{conversation}", input)
- self.reset(convo_id='conversationSummary')
- response = self.ask(prompt, convo_id='conversationSummary')
- while self.token_str(str(response))>self.max_tokens:
- response = response[:-self.decrease_step]
- self.reset(convo_id='conversationSummary',system_prompt='Summarize')
- self.conversation[convo_id] = [
- {"role": "system", "content": self.system_prompt},
- {"role": "user", "content": "Summariaze"},
- {"role": 'assistant', "content": response},
- ]
- return self.conversation[convo_id]
-
- def token_cost(self,convo_id: str = "default"):
- return len(self.ENCODER.encode("\n".join([x["content"] for x in self.conversation[convo_id]])))
-
- def token_str(self, content:str):
- return len(self.ENCODER.encode(content))
-
-def main():
- return
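
The wrapper above rotates API keys through a priority queue, truncates or summarizes over-long conversations, and exposes both streaming and non-streaming calls. A minimal usage sketch, assuming the module is importable as src.optimizeOpenAI (its path in this space) and that a real OpenAI key replaces the placeholder:

# Sketch: basic usage of the chatPaper wrapper defined above.
from src.optimizeOpenAI import chatPaper  # assumed import path

bot = chatPaper(api_keys=["sk-..."], apiTimeInterval=20)  # "sk-..." is a placeholder key

reply, prompt_tokens, completion_tokens, conversation_tokens = bot.ask(
    "Summarize the contributions of this paper in two sentences.",
    convo_id="paper-1",
)
print(reply)
print(prompt_tokens, completion_tokens, conversation_tokens)
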
diff --git a/spaces/kepl/gpt/client/js/change-language.js b/spaces/kepl/gpt/client/js/change-language.js
deleted file mode 100644
index ce87f6f60c7a9acca5e1902612930ef677f3fb65..0000000000000000000000000000000000000000
--- a/spaces/kepl/gpt/client/js/change-language.js
+++ /dev/null
@@ -1,47 +0,0 @@
-document.addEventListener('DOMContentLoaded', fetchLanguages);
-
-async function fetchLanguages() {
- try {
- const [languagesResponse, currentLanguageResponse] = await Promise.all([
- fetch(`${url_prefix}/get-languages`),
- fetch(`${url_prefix}/get-locale`)
- ]);
-
- const languages = await languagesResponse.json();
- const currentLanguage = await currentLanguageResponse.text();
-
- const languageSelect = document.getElementById('language');
- languages.forEach(lang => {
- const option = document.createElement('option');
- option.value = lang;
- option.textContent = lang;
- languageSelect.appendChild(option);
- });
-
- const savedLanguage = localStorage.getItem("language") || currentLanguage;
- setLanguageOnPageLoad(savedLanguage);
- } catch (error) {
- console.error("Failed to fetch languages or current language");
- }
-}
-
-function setLanguageOnPageLoad(language) {
- document.getElementById("language").value = language;
-}
-
-function changeLanguage(lang) {
- fetch(`${url_prefix}/change-language`, {
- method: "POST",
- headers: {
- "Content-Type": "application/json",
- },
- body: JSON.stringify({ language: lang }),
- }).then((response) => {
- if (response.ok) {
- localStorage.setItem("language", lang);
- location.reload();
- } else {
- console.error("Failed to change language");
- }
- });
-}
diff --git a/spaces/kevinwang676/Bark-Voice-Cloning/util/parseinput.py b/spaces/kevinwang676/Bark-Voice-Cloning/util/parseinput.py
deleted file mode 100644
index f2102648cf169f0a52bb66755308fee5f81247e0..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/Bark-Voice-Cloning/util/parseinput.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import re
-import xml.etree.ElementTree as ET
-from xml.sax import saxutils
-#import nltk
-
-# Chunked generation originally from https://github.com/serp-ai/bark-with-voice-clone
-def split_and_recombine_text(text, desired_length=100, max_length=150):
- # return nltk.sent_tokenize(text)
-
- # from https://github.com/neonbjb/tortoise-tts
- """Split text it into chunks of a desired length trying to keep sentences intact."""
- # normalize text, remove redundant whitespace and convert non-ascii quotes to ascii
- text = re.sub(r"\n\n+", "\n", text)
- text = re.sub(r"\s+", " ", text)
- text = re.sub(r"[“”]", '"', text)
-
- rv = []
- in_quote = False
- current = ""
- split_pos = []
- pos = -1
- end_pos = len(text) - 1
-
- def seek(delta):
- nonlocal pos, in_quote, current
- is_neg = delta < 0
- for _ in range(abs(delta)):
- if is_neg:
- pos -= 1
- current = current[:-1]
- else:
- pos += 1
- current += text[pos]
- if text[pos] == '"':
- in_quote = not in_quote
- return text[pos]
-
- def peek(delta):
- p = pos + delta
- return text[p] if p < end_pos and p >= 0 else ""
-
- def commit():
- nonlocal rv, current, split_pos
- rv.append(current)
- current = ""
- split_pos = []
-
- while pos < end_pos:
- c = seek(1)
- # do we need to force a split?
- if len(current) >= max_length:
- if len(split_pos) > 0 and len(current) > (desired_length / 2):
- # we have at least one sentence and we are over half the desired length, seek back to the last split
- d = pos - split_pos[-1]
- seek(-d)
- else:
- # no full sentences, seek back until we are not in the middle of a word and split there
- while c not in "!?.,\n " and pos > 0 and len(current) > desired_length:
- c = seek(-1)
- commit()
- # check for sentence boundaries
- elif not in_quote and (c in "!?]\n" or (c == "." and peek(1) in "\n ")):
- # seek forward if we have consecutive boundary markers but still within the max length
- while (
- pos < len(text) - 1 and len(current) < max_length and peek(1) in "!?.]"
- ):
- c = seek(1)
- split_pos.append(pos)
- if len(current) >= desired_length:
- commit()
- # treat end of quote as a boundary if it's followed by a space or newline
- elif in_quote and peek(1) == '"' and peek(2) in "\n ":
- seek(2)
- split_pos.append(pos)
- rv.append(current)
-
- # clean up, remove lines with only whitespace or punctuation
- rv = [s.strip() for s in rv]
- rv = [s for s in rv if len(s) > 0 and not re.match(r"^[\s\.,;:!?]*$", s)]
-
- return rv
-
-def is_ssml(value):
- try:
- ET.fromstring(value)
- except ET.ParseError:
- return False
- return True
-
-def build_ssml(rawtext, selected_voice):
- texts = rawtext.split("\n")
- joinedparts = ""
- for textpart in texts:
- textpart = textpart.strip()
- if len(textpart) < 1:
- continue
- joinedparts = joinedparts + f"\n{saxutils.escape(textpart)} "
- ssml = f"""
-
- {joinedparts}
-
- """
- return ssml
-
-def create_clips_from_ssml(ssmlinput):
- # Parse the XML
- tree = ET.ElementTree(ET.fromstring(ssmlinput))
- root = tree.getroot()
-
- # Create an empty list
- voice_list = []
-
- # Loop through all voice tags
- for voice in root.iter('{http://www.w3.org/2001/10/synthesis}voice'):
- # Extract the voice name attribute and the content text
- voice_name = voice.attrib['name']
- voice_content = voice.text.strip() if voice.text else ''
- if(len(voice_content) > 0):
- parts = split_and_recombine_text(voice_content)
- for p in parts:
- if(len(p) > 1):
- # add to tuple list
- voice_list.append((voice_name, p))
- return voice_list
-
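
split_and_recombine_text above chunks raw text at sentence boundaries so each piece stays within a comfortable generation length, while build_ssml wraps each input line in a voice element that create_clips_from_ssml can parse back out. A small sketch of both, assuming the module is importable as util.parseinput (its path in this space):

# Sketch: chunking text and round-tripping it through the SSML helpers above.
from util.parseinput import split_and_recombine_text, build_ssml, create_clips_from_ssml

text = (
    "Bark generates audio in short chunks. Long prompts are therefore split "
    "at sentence boundaries. Each chunk is synthesized separately and the "
    "audio is concatenated afterwards."
)

for chunk in split_and_recombine_text(text, desired_length=60, max_length=100):
    print(len(chunk), chunk)

ssml = build_ssml(text, "v2/en_speaker_1")  # "v2/en_speaker_1" is just an example voice name
clips = create_clips_from_ssml(ssml)        # [(voice_name, text_chunk), ...]
print(clips[0])
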
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/data_objects/speaker_batch.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/data_objects/speaker_batch.py
deleted file mode 100644
index 4485605e3ece5b491d1e7d0f223c543b6c91eb96..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/speaker_encoder/data_objects/speaker_batch.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import numpy as np
-from typing import List
-from speaker_encoder.data_objects.speaker import Speaker
-
-class SpeakerBatch:
- def __init__(self, speakers: List[Speaker], utterances_per_speaker: int, n_frames: int):
- self.speakers = speakers
- self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
-
- # Array of shape (n_speakers * n_utterances, n_frames, mel_n), e.g. for 3 speakers with
- # 4 utterances each of 160 frames of 40 mel coefficients: (12, 160, 40)
- self.data = np.array([frames for s in speakers for _, frames, _ in self.partials[s]])
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/load_mats.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/load_mats.py
deleted file mode 100644
index f9a6fcc71de1d7dad8b0f81c67dc1c213764ff0b..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/load_mats.py
+++ /dev/null
@@ -1,120 +0,0 @@
-"""This script is to load 3D face model for Deep3DFaceRecon_pytorch
-"""
-
-import numpy as np
-from PIL import Image
-from scipy.io import loadmat, savemat
-from array import array
-import os.path as osp
-
-# load expression basis
-def LoadExpBasis(bfm_folder='BFM'):
- n_vertex = 53215
- Expbin = open(osp.join(bfm_folder, 'Exp_Pca.bin'), 'rb')
- exp_dim = array('i')
- exp_dim.fromfile(Expbin, 1)
- expMU = array('f')
- expPC = array('f')
- expMU.fromfile(Expbin, 3*n_vertex)
- expPC.fromfile(Expbin, 3*exp_dim[0]*n_vertex)
- Expbin.close()
-
- expPC = np.array(expPC)
- expPC = np.reshape(expPC, [exp_dim[0], -1])
- expPC = np.transpose(expPC)
-
- expEV = np.loadtxt(osp.join(bfm_folder, 'std_exp.txt'))
-
- return expPC, expEV
-
-
-# transfer original BFM09 to our face model
-def transferBFM09(bfm_folder='BFM'):
- print('Transfer BFM09 to BFM_model_front......')
- original_BFM = loadmat(osp.join(bfm_folder, '01_MorphableModel.mat'))
- shapePC = original_BFM['shapePC'] # shape basis
- shapeEV = original_BFM['shapeEV'] # corresponding eigen value
- shapeMU = original_BFM['shapeMU'] # mean face
- texPC = original_BFM['texPC'] # texture basis
- texEV = original_BFM['texEV'] # eigen value
- texMU = original_BFM['texMU'] # mean texture
-
- expPC, expEV = LoadExpBasis(bfm_folder)
-
- # transfer BFM09 to our face model
-
- idBase = shapePC*np.reshape(shapeEV, [-1, 199])
- idBase = idBase/1e5 # unify the scale to decimeter
- idBase = idBase[:, :80] # use only first 80 basis
-
- exBase = expPC*np.reshape(expEV, [-1, 79])
- exBase = exBase/1e5 # unify the scale to decimeter
- exBase = exBase[:, :64] # use only first 64 basis
-
- texBase = texPC*np.reshape(texEV, [-1, 199])
- texBase = texBase[:, :80] # use only first 80 basis
-
- # our face model is cropped along face landmarks and contains only 35709 vertex.
- # original BFM09 contains 53490 vertex, and expression basis provided by Guo et al. contains 53215 vertex.
- # thus we select corresponding vertex to get our face model.
-
- index_exp = loadmat(osp.join(bfm_folder, 'BFM_front_idx.mat'))
- index_exp = index_exp['idx'].astype(np.int32) - 1 # starts from 0 (to 53215)
-
- index_shape = loadmat(osp.join(bfm_folder, 'BFM_exp_idx.mat'))
- index_shape = index_shape['trimIndex'].astype(
- np.int32) - 1 # starts from 0 (to 53490)
- index_shape = index_shape[index_exp]
-
- idBase = np.reshape(idBase, [-1, 3, 80])
- idBase = idBase[index_shape, :, :]
- idBase = np.reshape(idBase, [-1, 80])
-
- texBase = np.reshape(texBase, [-1, 3, 80])
- texBase = texBase[index_shape, :, :]
- texBase = np.reshape(texBase, [-1, 80])
-
- exBase = np.reshape(exBase, [-1, 3, 64])
- exBase = exBase[index_exp, :, :]
- exBase = np.reshape(exBase, [-1, 64])
-
- meanshape = np.reshape(shapeMU, [-1, 3])/1e5
- meanshape = meanshape[index_shape, :]
- meanshape = np.reshape(meanshape, [1, -1])
-
- meantex = np.reshape(texMU, [-1, 3])
- meantex = meantex[index_shape, :]
- meantex = np.reshape(meantex, [1, -1])
-
- # other info contains triangles, region used for computing photometric loss,
- # region used for skin texture regularization, and 68 landmarks index etc.
- other_info = loadmat(osp.join(bfm_folder, 'facemodel_info.mat'))
- frontmask2_idx = other_info['frontmask2_idx']
- skinmask = other_info['skinmask']
- keypoints = other_info['keypoints']
- point_buf = other_info['point_buf']
- tri = other_info['tri']
- tri_mask2 = other_info['tri_mask2']
-
- # save our face model
- savemat(osp.join(bfm_folder, 'BFM_model_front.mat'), {'meanshape': meanshape, 'meantex': meantex, 'idBase': idBase, 'exBase': exBase, 'texBase': texBase,
- 'tri': tri, 'point_buf': point_buf, 'tri_mask2': tri_mask2, 'keypoints': keypoints, 'frontmask2_idx': frontmask2_idx, 'skinmask': skinmask})
-
-
-# load landmarks for standard face, which is used for image preprocessing
-def load_lm3d(bfm_folder):
-
- Lm3D = loadmat(osp.join(bfm_folder, 'similarity_Lm3D_all.mat'))
- Lm3D = Lm3D['lm']
-
- # calculate 5 facial landmarks using 68 landmarks
- lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1
- Lm3D = np.stack([Lm3D[lm_idx[0], :], np.mean(Lm3D[lm_idx[[1, 2]], :], 0), np.mean(
- Lm3D[lm_idx[[3, 4]], :], 0), Lm3D[lm_idx[5], :], Lm3D[lm_idx[6], :]], axis=0)
- Lm3D = Lm3D[[1, 2, 0, 3, 4], :]
-
- return Lm3D
-
-
-if __name__ == '__main__':
- transferBFM09()
\ No newline at end of file
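
load_lm3d above ends by collapsing the standard 68 facial landmarks into the 5-point layout (eye centres, nose tip, mouth corners) used for alignment. A self-contained sketch of just that reduction on synthetic landmarks, so no BFM assets are required:

# Sketch: the 68 -> 5 landmark reduction at the end of load_lm3d, on random data.
import numpy as np

lm68 = np.random.rand(68, 3)  # stand-in for Lm3D['lm']

lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1   # 1-based indices of nose, eye and mouth corners
lm5 = np.stack([
    lm68[lm_idx[0], :],                   # nose tip
    np.mean(lm68[lm_idx[[1, 2]], :], 0),  # left eye centre (mean of its two corners)
    np.mean(lm68[lm_idx[[3, 4]], :], 0),  # right eye centre
    lm68[lm_idx[5], :],                   # left mouth corner
    lm68[lm_idx[6], :],                   # right mouth corner
], axis=0)
lm5 = lm5[[1, 2, 0, 3, 4], :]  # reorder to: left eye, right eye, nose, left mouth, right mouth
print(lm5.shape)               # (5, 3)
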
diff --git a/spaces/kevinwang676/FreeVC/speaker_encoder/data_objects/speaker_verification_dataset.py b/spaces/kevinwang676/FreeVC/speaker_encoder/data_objects/speaker_verification_dataset.py
deleted file mode 100644
index cecd8ed8ac100b80d5087fa47f22f92c84fea032..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/FreeVC/speaker_encoder/data_objects/speaker_verification_dataset.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from speaker_encoder.data_objects.random_cycler import RandomCycler
-from speaker_encoder.data_objects.speaker_batch import SpeakerBatch
-from speaker_encoder.data_objects.speaker import Speaker
-from speaker_encoder.params_data import partials_n_frames
-from torch.utils.data import Dataset, DataLoader
-from pathlib import Path
-
-# TODO: improve with a pool of speakers for data efficiency
-
-class SpeakerVerificationDataset(Dataset):
- def __init__(self, datasets_root: Path):
- self.root = datasets_root
- speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()]
- if len(speaker_dirs) == 0:
- raise Exception("No speakers found. Make sure you are pointing to the directory "
- "containing all preprocessed speaker directories.")
- self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs]
- self.speaker_cycler = RandomCycler(self.speakers)
-
- def __len__(self):
- return int(1e10)
-
- def __getitem__(self, index):
- return next(self.speaker_cycler)
-
- def get_logs(self):
- log_string = ""
- for log_fpath in self.root.glob("*.txt"):
- with log_fpath.open("r") as log_file:
- log_string += "".join(log_file.readlines())
- return log_string
-
-
-class SpeakerVerificationDataLoader(DataLoader):
- def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None,
- batch_sampler=None, num_workers=0, pin_memory=False, timeout=0,
- worker_init_fn=None):
- self.utterances_per_speaker = utterances_per_speaker
-
- super().__init__(
- dataset=dataset,
- batch_size=speakers_per_batch,
- shuffle=False,
- sampler=sampler,
- batch_sampler=batch_sampler,
- num_workers=num_workers,
- collate_fn=self.collate,
- pin_memory=pin_memory,
- drop_last=False,
- timeout=timeout,
- worker_init_fn=worker_init_fn
- )
-
- def collate(self, speakers):
- return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
-
\ No newline at end of file
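
SpeakerVerificationDataLoader above builds GE2E-style batches: each batch holds speakers_per_batch speakers with utterances_per_speaker partial utterances each. A minimal wiring sketch, where "SV2TTS/encoder" is a placeholder for a directory of preprocessed per-speaker folders:

# Sketch: building GE2E batches from preprocessed speaker directories.
from pathlib import Path
from speaker_encoder.data_objects.speaker_verification_dataset import (
    SpeakerVerificationDataset,
    SpeakerVerificationDataLoader,
)

dataset = SpeakerVerificationDataset(Path("SV2TTS/encoder"))  # placeholder path
loader = SpeakerVerificationDataLoader(
    dataset,
    speakers_per_batch=4,
    utterances_per_speaker=5,
)

batch = next(iter(loader))
print(batch.data.shape)  # (4 * 5, partials_n_frames, n_mels), e.g. (20, 160, 40)
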
diff --git a/spaces/keyu-tian/SparK/app.py b/spaces/keyu-tian/SparK/app.py
deleted file mode 100644
index 3703e2db0009fea1686d779101b431c47248e5e9..0000000000000000000000000000000000000000
--- a/spaces/keyu-tian/SparK/app.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import gradio as gr
-
-def greet(name):
- return "Hello " + name + "!!"
-
-iface = gr.Interface(fn=greet, inputs="text", outputs="text")
-iface.launch()
diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/encoder_preprocess.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/encoder_preprocess.py
deleted file mode 100644
index 853c6cb6c5cdda5c2e53ce3370d2570f2925f01a..0000000000000000000000000000000000000000
--- a/spaces/kira4424/Tacotron-zero-short-voice-clone/encoder_preprocess.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from encoder.preprocess import preprocess_librispeech, preprocess_voxceleb1, preprocess_voxceleb2, preprocess_aidatatang_200zh
-from utils.argutils import print_args
-from pathlib import Path
-import argparse
-
-if __name__ == "__main__":
- class MyFormatter(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter):
- pass
-
- parser = argparse.ArgumentParser(
- description="Preprocesses audio files from datasets, encodes them as mel spectrograms and "
- "writes them to the disk. This will allow you to train the encoder. The "
- "datasets required are at least one of LibriSpeech, VoxCeleb1, VoxCeleb2, aidatatang_200zh. ",
- formatter_class=MyFormatter
- )
- parser.add_argument("datasets_root", type=Path, help=\
- "Path to the directory containing your LibriSpeech/TTS and VoxCeleb datasets.")
- parser.add_argument("-o", "--out_dir", type=Path, default=argparse.SUPPRESS, help=\
- "Path to the output directory that will contain the mel spectrograms. If left out, "
- "defaults to /SV2TTS/encoder/")
- parser.add_argument("-d", "--datasets", type=str,
- default="librispeech_other,voxceleb1,aidatatang_200zh", help=\
- "Comma-separated list of the name of the datasets you want to preprocess. Only the train "
- "set of these datasets will be used. Possible names: librispeech_other, voxceleb1, "
- "voxceleb2.")
- parser.add_argument("-s", "--skip_existing", action="store_true", help=\
- "Whether to skip existing output files with the same name. Useful if this script was "
- "interrupted.")
- parser.add_argument("--no_trim", action="store_true", help=\
- "Preprocess audio without trimming silences (not recommended).")
- args = parser.parse_args()
-
- # Verify webrtcvad is available
- if not args.no_trim:
- try:
- import webrtcvad
- except:
- raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables "
- "noise removal and is recommended. Please install and try again. If installation fails, "
- "use --no_trim to disable this error message.")
- del args.no_trim
-
- # Process the arguments
- args.datasets = args.datasets.split(",")
- if not hasattr(args, "out_dir"):
- args.out_dir = args.datasets_root.joinpath("SV2TTS", "encoder")
- assert args.datasets_root.exists()
- args.out_dir.mkdir(exist_ok=True, parents=True)
-
- # Preprocess the datasets
- print_args(args, parser)
- preprocess_func = {
- "librispeech_other": preprocess_librispeech,
- "voxceleb1": preprocess_voxceleb1,
- "voxceleb2": preprocess_voxceleb2,
- "aidatatang_200zh": preprocess_aidatatang_200zh,
- }
- args = vars(args)
- for dataset in args.pop("datasets"):
- print("Preprocessing %s" % dataset)
- preprocess_func[dataset](**args)
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/da_head.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/da_head.py
deleted file mode 100644
index 5cd49fcfdc7c0a70f9485cc71843dcf3e0cb1774..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/da_head.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import torch
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, Scale
-from torch import nn
-
-from annotator.uniformer.mmseg.core import add_prefix
-from ..builder import HEADS
-from ..utils import SelfAttentionBlock as _SelfAttentionBlock
-from .decode_head import BaseDecodeHead
-
-
-class PAM(_SelfAttentionBlock):
- """Position Attention Module (PAM)
-
- Args:
- in_channels (int): Input channels of key/query feature.
- channels (int): Output channels of key/query transform.
- """
-
- def __init__(self, in_channels, channels):
- super(PAM, self).__init__(
- key_in_channels=in_channels,
- query_in_channels=in_channels,
- channels=channels,
- out_channels=in_channels,
- share_key_query=False,
- query_downsample=None,
- key_downsample=None,
- key_query_num_convs=1,
- key_query_norm=False,
- value_out_num_convs=1,
- value_out_norm=False,
- matmul_norm=False,
- with_out=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None)
-
- self.gamma = Scale(0)
-
- def forward(self, x):
- """Forward function."""
- out = super(PAM, self).forward(x, x)
-
- out = self.gamma(out) + x
- return out
-
-
-class CAM(nn.Module):
- """Channel Attention Module (CAM)"""
-
- def __init__(self):
- super(CAM, self).__init__()
- self.gamma = Scale(0)
-
- def forward(self, x):
- """Forward function."""
- batch_size, channels, height, width = x.size()
- proj_query = x.view(batch_size, channels, -1)
- proj_key = x.view(batch_size, channels, -1).permute(0, 2, 1)
- energy = torch.bmm(proj_query, proj_key)
- energy_new = torch.max(
- energy, -1, keepdim=True)[0].expand_as(energy) - energy
- attention = F.softmax(energy_new, dim=-1)
- proj_value = x.view(batch_size, channels, -1)
-
- out = torch.bmm(attention, proj_value)
- out = out.view(batch_size, channels, height, width)
-
- out = self.gamma(out) + x
- return out
-
-
-@HEADS.register_module()
-class DAHead(BaseDecodeHead):
- """Dual Attention Network for Scene Segmentation.
-
- This head is the implementation of `DANet
- `_.
-
- Args:
- pam_channels (int): The channels of Position Attention Module(PAM).
- """
-
- def __init__(self, pam_channels, **kwargs):
- super(DAHead, self).__init__(**kwargs)
- self.pam_channels = pam_channels
- self.pam_in_conv = ConvModule(
- self.in_channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.pam = PAM(self.channels, pam_channels)
- self.pam_out_conv = ConvModule(
- self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.pam_conv_seg = nn.Conv2d(
- self.channels, self.num_classes, kernel_size=1)
-
- self.cam_in_conv = ConvModule(
- self.in_channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.cam = CAM()
- self.cam_out_conv = ConvModule(
- self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.cam_conv_seg = nn.Conv2d(
- self.channels, self.num_classes, kernel_size=1)
-
- def pam_cls_seg(self, feat):
- """PAM feature classification."""
- if self.dropout is not None:
- feat = self.dropout(feat)
- output = self.pam_conv_seg(feat)
- return output
-
- def cam_cls_seg(self, feat):
- """CAM feature classification."""
- if self.dropout is not None:
- feat = self.dropout(feat)
- output = self.cam_conv_seg(feat)
- return output
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- pam_feat = self.pam_in_conv(x)
- pam_feat = self.pam(pam_feat)
- pam_feat = self.pam_out_conv(pam_feat)
- pam_out = self.pam_cls_seg(pam_feat)
-
- cam_feat = self.cam_in_conv(x)
- cam_feat = self.cam(cam_feat)
- cam_feat = self.cam_out_conv(cam_feat)
- cam_out = self.cam_cls_seg(cam_feat)
-
- feat_sum = pam_feat + cam_feat
- pam_cam_out = self.cls_seg(feat_sum)
-
- return pam_cam_out, pam_out, cam_out
-
- def forward_test(self, inputs, img_metas, test_cfg):
- """Forward function for testing, only ``pam_cam`` is used."""
- return self.forward(inputs)[0]
-
- def losses(self, seg_logit, seg_label):
- """Compute ``pam_cam``, ``pam``, ``cam`` loss."""
- pam_cam_seg_logit, pam_seg_logit, cam_seg_logit = seg_logit
- loss = dict()
- loss.update(
- add_prefix(
- super(DAHead, self).losses(pam_cam_seg_logit, seg_label),
- 'pam_cam'))
- loss.update(
- add_prefix(
- super(DAHead, self).losses(pam_seg_logit, seg_label), 'pam'))
- loss.update(
- add_prefix(
- super(DAHead, self).losses(cam_seg_logit, seg_label), 'cam'))
- return loss
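A minimal, hedged usage sketch of the head deleted above (not part of the original file): the constructor keyword arguments follow the mmseg ``BaseDecodeHead`` interface bundled in this repo, and the shapes are illustrative only.

# Hedged sketch, assuming the bundled mmseg BaseDecodeHead accepts these kwargs.
import torch

head = DAHead(pam_channels=64, in_channels=2048, in_index=0, channels=512,
              num_classes=19, norm_cfg=dict(type='BN'), align_corners=False)
feats = [torch.randn(2, 2048, 8, 8)]           # one backbone feature map
pam_cam_out, pam_out, cam_out = head(feats)    # fused logits plus one map per branch
print(pam_cam_out.shape)                       # torch.Size([2, 19, 8, 8])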
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/jupyter.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/jupyter.py
deleted file mode 100644
index 782fa86399d0ae7e4abaf5bad590f6a67f1a4f08..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/jupyter.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import base64
-import io
-import re
-
-import requests
-
-import fsspec
-
-
-class JupyterFileSystem(fsspec.AbstractFileSystem):
- """View of the files as seen by a Jupyter server (notebook or lab)"""
-
- protocol = ("jupyter", "jlab")
-
- def __init__(self, url, tok=None, **kwargs):
- """
-
- Parameters
- ----------
- url : str
- Base URL of the server, like "http://127.0.0.1:8888". May include
- token in the string, which is given by the process when starting up
- tok : str
- If the token is obtained separately, can be given here
- kwargs
- """
- if "?" in url:
- if tok is None:
- try:
- tok = re.findall("token=([a-z0-9]+)", url)[0]
- except IndexError as e:
- raise ValueError("Could not determine token") from e
- url = url.split("?", 1)[0]
- self.url = url.rstrip("/") + "/api/contents"
- self.session = requests.Session()
- if tok:
- self.session.headers["Authorization"] = f"token {tok}"
-
- super().__init__(**kwargs)
-
- def ls(self, path, detail=True, **kwargs):
- path = self._strip_protocol(path)
- r = self.session.get(self.url + "/" + path)
- if r.status_code == 404:
-            raise FileNotFoundError(path)
- r.raise_for_status()
- out = r.json()
-
- if out["type"] == "directory":
- out = out["content"]
- else:
- out = [out]
- for o in out:
- o["name"] = o.pop("path")
- o.pop("content")
- if o["type"] == "notebook":
- o["type"] = "file"
- if detail:
- return out
- return [o["name"] for o in out]
-
- def cat_file(self, path, start=None, end=None, **kwargs):
- path = self._strip_protocol(path)
- r = self.session.get(self.url + "/" + path)
- if r.status_code == 404:
-            raise FileNotFoundError(path)
- r.raise_for_status()
- out = r.json()
- if out["format"] == "text":
- # data should be binary
- b = out["content"].encode()
- else:
- b = base64.b64decode(out["content"])
- return b[start:end]
-
- def pipe_file(self, path, value, **_):
- path = self._strip_protocol(path)
- json = {
- "name": path.rsplit("/", 1)[-1],
- "path": path,
- "size": len(value),
- "content": base64.b64encode(value).decode(),
- "format": "base64",
- "type": "file",
- }
- self.session.put(self.url + "/" + path, json=json)
-
- def mkdir(self, path, create_parents=True, **kwargs):
- path = self._strip_protocol(path)
- if create_parents and "/" in path:
- self.mkdir(path.rsplit("/", 1)[0], True)
- json = {
- "name": path.rsplit("/", 1)[-1],
- "path": path,
- "size": None,
- "content": None,
- "type": "directory",
- }
- self.session.put(self.url + "/" + path, json=json)
-
- def _rm(self, path):
- path = self._strip_protocol(path)
- self.session.delete(self.url + "/" + path)
-
- def _open(self, path, mode="rb", **kwargs):
- path = self._strip_protocol(path)
- if mode == "rb":
- data = self.cat_file(path)
- return io.BytesIO(data)
- else:
- return SimpleFileWriter(self, path, mode="wb")
-
-
-class SimpleFileWriter(fsspec.spec.AbstractBufferedFile):
- def _upload_chunk(self, final=False):
- """Never uploads a chunk until file is done
-
- Not suitable for large files
- """
- if final is False:
- return False
- self.buffer.seek(0)
- data = self.buffer.read()
- self.fs.pipe_file(self.path, data)
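A short, hedged usage sketch for the filesystem deleted above (not from the original module); the server URL and token are placeholders and a Jupyter server must already be running.

# Hedged example: "jupyter" is the protocol this class registers with fsspec.
import fsspec

fs = fsspec.filesystem("jupyter", url="http://127.0.0.1:8888", tok="PLACEHOLDER")
print(fs.ls("", detail=False))            # list the server's root directory
fs.mkdir("notes")                         # create a directory via the contents API
fs.pipe_file("notes/hello.txt", b"hi")    # write a small file
print(fs.cat_file("notes/hello.txt"))     # b'hi'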
diff --git a/spaces/leogabraneth/text-generation-webui-main/update_macos.sh b/spaces/leogabraneth/text-generation-webui-main/update_macos.sh
deleted file mode 100644
index 371db554a33f53f3bd3c5bf15fedeaf2f6812639..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/update_macos.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/bin/bash
-
-cd "$(dirname "${BASH_SOURCE[0]}")"
-
-if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which can not be silently installed under a path with spaces. && exit; fi
-
-# deactivate existing conda envs as needed to avoid conflicts
-{ conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null
-
-# config
-CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda"
-INSTALL_ENV_DIR="$(pwd)/installer_files/env"
-
-# environment isolation
-export PYTHONNOUSERSITE=1
-unset PYTHONPATH
-unset PYTHONHOME
-export CUDA_PATH="$INSTALL_ENV_DIR"
-export CUDA_HOME="$CUDA_PATH"
-
-# activate installer env
-source "$CONDA_ROOT_PREFIX/etc/profile.d/conda.sh" # otherwise conda complains about 'shell not initialized' (needed when running in a script)
-conda activate "$INSTALL_ENV_DIR"
-
-# update installer env
-python one_click.py --update && echo -e "\nDone!"
diff --git a/spaces/lewispons/GrammarGuru/src/models/utils/mlutilities.py b/spaces/lewispons/GrammarGuru/src/models/utils/mlutilities.py
deleted file mode 100644
index 911460a166eceb45e6f7c2c7fafd189a29c6bead..0000000000000000000000000000000000000000
--- a/spaces/lewispons/GrammarGuru/src/models/utils/mlutilities.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import pandas as pd
-from gensim.corpora import Dictionary
-from gensim.similarities import SparseMatrixSimilarity
-from gensim.models import TfidfModel
-from gensim.parsing import strip_tags, strip_numeric, \
- strip_multiple_whitespaces, stem_text, strip_punctuation, \
- remove_stopwords, preprocess_string
-
-from re import sub
-from typing import List
-from functools import cache
-
-
-transform_to_lower = lambda s: s.lower()
-remove_single_char = lambda s: sub(r'\s+\w{1}\s+', '', s)
-
-cleaning_filters = [
- strip_tags,
- strip_numeric,
- strip_punctuation,
- strip_multiple_whitespaces,
- transform_to_lower,
- remove_stopwords,
- remove_single_char
-]
-
-def gensim_tokenizer(docs: List[str]):
- """
- Tokenizes a list of strings using a series of cleaning filters.
-
- Args:
- docs (List[str]): A list of strings to be tokenized.
-
- Returns:
- List[List[str]]: A list of tokenized documents, where each document is represented as a list of tokens.
- """
- tokenized_docs = list()
- for doc in docs:
- processed_words = preprocess_string(doc, cleaning_filters)
- tokenized_docs.append(processed_words)
-
- return tokenized_docs
-
-
-def cleaning_pipe(document):
- """
- Applies a series of cleaning steps to a document.
-
- Args:
- document (str): The document to be cleaned.
-
- Returns:
- list: A list of processed words after applying the cleaning filters.
- """
- # Invoking gensim.parsing.preprocess_string method with set of filters
- processed_words = preprocess_string(document, cleaning_filters)
- return processed_words
-
-
-def get_closest_n(dictionary: Dictionary, index: SparseMatrixSimilarity, tfidf_model : TfidfModel, query: str, n: int):
- '''
- Retrieves the top matching documents as per cosine similarity
- between the TF-IDF vector of the query and all documents.
-
- Args:
- query (str): The query string to find matching documents.
- n (int): The number of closest documents to retrieve.
-
- Returns:
- numpy.ndarray: An array of indices representing the top matching documents.
- '''
- # Clean the query document using cleaning_pipe function
- query_document = cleaning_pipe(query)
-
- # Convert the query document to bag-of-words representation
- query_bow = dictionary.doc2bow(query_document)
-
- # Calculate similarity scores between the query and all documents using TF-IDF model
- sims = index[tfidf_model[query_bow]]
-
- # Get the indices of the top n closest documents based on similarity scores
- top_idx = sims.argsort()[-1 * n:][::-1]
-
- return top_idx
-
-
-def get_recomendations_metadata(query: str, df: pd.DataFrame, n: int,
- dictionary: Dictionary, index: SparseMatrixSimilarity,
- tfidf_model : TfidfModel) -> pd.DataFrame:
- '''
- Retrieves metadata recommendations based on a query using cosine similarity.
-
- Args:
- query (str): The query string for which recommendations are sought.
- n (int): The number of recommendations to retrieve.
- df (pd.DataFrame): The DataFrame containing metadata information.
-
- Returns:
- pd.DataFrame: A DataFrame containing the recommended metadata, reset with a new index.
- '''
- # Get the indices of the closest matching documents based on the query
- recommendations_idxs = get_closest_n(dictionary, index, tfidf_model, query, n)
-
- # Retrieve the recommended metadata rows from the DataFrame based on the indices
- recommendations_metadata = df.iloc[recommendations_idxs]
-
- # Reset the index of the recommended metadata DataFrame
- recommendations_metadata = recommendations_metadata.reset_index(drop=True)
-
- return recommendations_metadata
- # return recommendations_idxs
-
-@cache
-def load_arxiv_parquet(path: str):
- df = pd.read_parquet(path)
- return df
-
-@cache
-def load_dict(path: str):
- dict_corpus = Dictionary.load(path)
- return dict_corpus
-
-@cache
-def load_model(path: str ):
- tfidf_model = TfidfModel.load(path)
- return tfidf_model
-
-@cache
-def load_sparse_matrix(path: str):
- similarities = SparseMatrixSimilarity.load(path)
- return similarities
\ No newline at end of file
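A hedged sketch (not in the deleted file) of how the cached loaders above fit together: it builds the dictionary, TF-IDF model and similarity index from a toy corpus in memory; the column name and the toy data are assumptions.

# Hedged sketch: building the gensim artifacts that the helpers above expect.
import pandas as pd
from gensim.corpora import Dictionary
from gensim.models import TfidfModel
from gensim.similarities import SparseMatrixSimilarity

df = pd.DataFrame({"abstract": ["Deep learning for language modeling",
                                "Bayesian inference for small datasets"]})
tokens = gensim_tokenizer(df["abstract"].tolist())
dictionary = Dictionary(tokens)
bow = [dictionary.doc2bow(doc) for doc in tokens]
tfidf = TfidfModel(bow)
index = SparseMatrixSimilarity(tfidf[bow], num_features=len(dictionary))

print(get_recomendations_metadata("neural networks for text", df, 1,
                                  dictionary, index, tfidf))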
diff --git a/spaces/lewiswu1209/MockingBird/vocoder/hifigan/utils.py b/spaces/lewiswu1209/MockingBird/vocoder/hifigan/utils.py
deleted file mode 100644
index e67cbcda0744201d8342b212160808b7c934ea64..0000000000000000000000000000000000000000
--- a/spaces/lewiswu1209/MockingBird/vocoder/hifigan/utils.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + '????????.pt')
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
-
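A brief, hedged example of the checkpoint helpers above (not part of the original file); the directory, filename prefix and state-dict key are placeholders that depend on the training script.

# Hedged usage: resume from the newest generator checkpoint, if one exists.
import torch

latest = scan_checkpoint("checkpoints/hifigan", "g_")   # e.g. .../g_00050000.pt
if latest is not None:
    state = load_checkpoint(latest, torch.device("cpu"))
    # generator.load_state_dict(state["generator"])     # key name is an assumption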
diff --git a/spaces/lighdow/anime-cute-tts/modules.py b/spaces/lighdow/anime-cute-tts/modules.py
deleted file mode 100644
index 3484f6a1f4c1c06855c37a1ff4e66c58864acb38..0000000000000000000000000000000000000000
--- a/spaces/lighdow/anime-cute-tts/modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
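A small, hedged sanity check (not from the original module) showing that the coupling layer above inverts itself; it assumes the repo's `commons` and `transforms` modules are importable, as this file already requires.

# Hedged sketch: forward then reverse through a ResidualCouplingLayer recovers x.
import torch

layer = ResidualCouplingLayer(channels=4, hidden_channels=8, kernel_size=5,
                              dilation_rate=1, n_layers=2, mean_only=True)
x = torch.randn(1, 4, 10)
x_mask = torch.ones(1, 1, 10)
y, _ = layer(x, x_mask)                  # forward pass returns (y, logdet)
x_rec = layer(y, x_mask, reverse=True)   # reverse pass undoes the transform
print(torch.allclose(x, x_rec, atol=1e-6))   # True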
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Edius Pro 7 Full Software Free Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Edius Pro 7 Full Software Free Download.md
deleted file mode 100644
index 86987aeb21d9b42469a5b6bba8b4cd3a3f891ff6..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Edius Pro 7 Full Software Free Download.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-How to Download Edius Pro 7 Full Software for Free
-Edius Pro 7 is a powerful video editing software that allows you to create professional-quality videos with ease. Whether you are a beginner or an expert, Edius Pro 7 can handle any type of project, from simple cuts to complex effects. You can edit videos in any format, from SD to 4K, and export them to various devices and platforms.
-Edius pro 7 full software free download DOWNLOAD 🔗 https://bytlly.com/2uGyq5
-But how can you get Edius Pro 7 full software for free? Is there a safe and legal way to download it without paying anything? In this article, we will show you how to download Edius Pro 7 full software for free from the official website of Edius. We will also provide you with some tips and tricks to make the most out of this amazing software.
-Step 1: Visit the Official Website of Edius
-The first step to download Edius Pro 7 full software for free is to visit the official website of Edius at https://www.edius.net/ . Here you will find all the information you need about Edius Pro 7, such as its features, specifications, system requirements, tutorials, and more. You will also see a button that says "Download Free Trial". Click on it to proceed to the next step.
-
-Step 2: Fill Out the Registration Form
-The next step to download Edius Pro 7 full software for free is to fill out the registration form that appears on the screen. You will need to provide some basic information, such as your name, email address, country, and language. You will also need to agree to the terms and conditions of the trial version. After filling out the form, click on "Submit" to receive an email with a download link and a serial number.
-
-Step 3: Download and Install Edius Pro 7 Full Software
-The final step to download Edius Pro 7 full software for free is to download and install it on your computer. To do this, open the email that you received from Edius and click on the download link. This will take you to a page where you can choose between a 64-bit or a 32-bit version of Edius Pro 7. Choose the one that matches your system and click on "Download". The file size is about 500 MB, so it may take some time depending on your internet speed.
-Once the download is complete, open the file and follow the instructions on the screen to install Edius Pro 7 on your computer. You will need to enter the serial number that you received in the email during the installation process. After the installation is complete, you can launch Edius Pro 7 from your desktop or start menu.
-
-Congratulations! You Have Successfully Downloaded Edius Pro 7 Full Software for Free
-You have now successfully downloaded Edius Pro 7 full software for free from the official website of Edius. You can use it for 30 days without any limitations or restrictions. You can edit any video you want with this powerful software and create stunning results.
-
-However, if you want to continue using Edius Pro 7 after the trial period expires, you will need to purchase a license from Edius or one of its authorized dealers. The price of Edius Pro 7 is $699 USD for a perpetual license or $199 USD for an annual subscription. You can also upgrade from previous versions of Edius at a discounted price.
-If you are interested in buying Edius Pro 7, you can visit https://www
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Gambit 2.4.6.torrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Gambit 2.4.6.torrent.md
deleted file mode 100644
index 5ee5edfadff79c946b5912a2d514d146443ac448..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Gambit 2.4.6.torrent.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-GAMBIT is a preprocessing tool aimed at fluid-mechanics work: it builds the geometry and the mesh that a CFD solver consumes. You can also import and modify geometry created by other CFD designers instead of recreating it, and the GAMBIT UI is genuinely helpful when you are creating or editing a boundary or solid. GAMBIT can generate a mesh from a CFD model or from your own CAD/CAE model.
-One of the fastest growing video game genres is the MMORPG, or massively multiplayer online role-playing game, and GAMBIT can be used for these games as well. In an MMORPG you play as an avatar, a character in a shared world of other characters, and GAMBIT can be used to model the land, terrain, landscape, people, and animals of that world.
-Gambit 2.4.6.torrent DOWNLOAD ::: https://bytlly.com/2uGxJ1
-gambit 2.4.6.torrent contains many tools that are commonly used in the geometrical design business, and the math tools in it are often similar to the ones used in engineering, both civil and mechanical, so gambit can be used for most types of design work.
-this version of gambit contains a few new options and some minor improvements. the most important change is the addition of the autocad file format. that change is described on the gambit software page.
-we are not responsible for any damage that might result from using this download. there are many different versions of gambit on the internet. some of them are good, some are bad, and some are both good and bad. we have tried to make this version of gambit as good as possible. if you have problems using this version of gambit , try to download another version and see if it works better. if you can not get it to work, then you may need to try to install gambit again.
-
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py b/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py
deleted file mode 100644
index 93d0701c0094517cec147c382b005e8063938548..0000000000000000000000000000000000000000
--- a/spaces/lithiumice/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "cosface"
-config.network = "r100"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/glint360k"
-config.num_classes = 360232
-config.num_image = 17091657
-config.num_epoch = 20
-config.warmup_epoch = -1
-config.decay_epoch = [8, 12, 15, 18]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/unique.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/unique.h
deleted file mode 100644
index c2aff4c6489ccf47e76288ffd7c5afe7c43b2dc0..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/unique.h
+++ /dev/null
@@ -1,801 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-
-template <typename DerivedPolicy, typename ForwardIterator, typename BinaryPredicate>
-__host__ __device__ ForwardIterator
-unique(
-    const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-    ForwardIterator first,
-    ForwardIterator last,
-    BinaryPredicate binary_pred);
-
-template <typename DerivedPolicy, typename InputIterator, typename OutputIterator, typename BinaryPredicate>
-__host__ __device__ OutputIterator
-unique_copy(
-    const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-    InputIterator first,
-    InputIterator last,
-    OutputIterator result,
-    BinaryPredicate binary_pred);
-
-namespace cuda_cub {
-
-// XXX it should be possible to unify unique & unique_by_key into a single
-// agent with various specializations, similar to what is done
-// with partition
-namespace __unique {
-
- template
- struct PtxPolicy
- {
- enum
- {
- BLOCK_THREADS = _BLOCK_THREADS,
- ITEMS_PER_THREAD = _ITEMS_PER_THREAD,
- ITEMS_PER_TILE = _BLOCK_THREADS * _ITEMS_PER_THREAD,
- };
- static const cub::BlockLoadAlgorithm LOAD_ALGORITHM = _LOAD_ALGORITHM;
- static const cub::CacheLoadModifier LOAD_MODIFIER = _LOAD_MODIFIER;
- static const cub::BlockScanAlgorithm SCAN_ALGORITHM = _SCAN_ALGORITHM;
- }; // struct PtxPolicy
-
- template
- struct Tuning;
-
- namespace mpl = thrust::detail::mpl::math;
-
- template
- struct items_per_thread
- {
- enum
- {
- value = mpl::min<
- int,
- NOMINAL_4B_ITEMS_PER_THREAD,
- mpl::max::value>::value
- };
- };
-
- template
- struct Tuning
- {
- const static int INPUT_SIZE = sizeof(T);
- enum
- {
- NOMINAL_4B_ITEMS_PER_THREAD = 11,
- //
- ITEMS_PER_THREAD = items_per_thread::value
- };
-
- typedef PtxPolicy<64,
- ITEMS_PER_THREAD,
- cub::BLOCK_LOAD_WARP_TRANSPOSE,
- cub::LOAD_LDG,
- cub::BLOCK_SCAN_WARP_SCANS>
- type;
- }; // Tuning for sm52
-
-
- template
- struct Tuning
- {
- const static int INPUT_SIZE = sizeof(T);
- enum
- {
- NOMINAL_4B_ITEMS_PER_THREAD = 9,
- //
- ITEMS_PER_THREAD = items_per_thread::value
- };
-
- typedef PtxPolicy<128,
- ITEMS_PER_THREAD,
- cub::BLOCK_LOAD_WARP_TRANSPOSE,
- cub::LOAD_LDG,
- cub::BLOCK_SCAN_WARP_SCANS>
- type;
- }; // Tuning for sm35
-
- template
- struct Tuning
- {
- const static int INPUT_SIZE = sizeof(T);
- enum
- {
- NOMINAL_4B_ITEMS_PER_THREAD = 7,
- //
- ITEMS_PER_THREAD = items_per_thread::value
- };
-
- typedef PtxPolicy<128,
- ITEMS_PER_THREAD,
- cub::BLOCK_LOAD_WARP_TRANSPOSE,
- cub::LOAD_DEFAULT,
- cub::BLOCK_SCAN_WARP_SCANS>
- type;
- }; // Tuning for sm30
-
- template
- struct UniqueAgent
- {
- typedef typename iterator_traits::value_type item_type;
-
- typedef cub::ScanTileState ScanTileState;
-
- template
- struct PtxPlan : Tuning::type
- {
- typedef Tuning tuning;
-
- typedef typename core::LoadIterator::type ItemsLoadIt;
-
- typedef typename core::BlockLoad::type BlockLoadItems;
-
- typedef cub::BlockDiscontinuity
- BlockDiscontinuityItems;
-
- typedef cub::TilePrefixCallbackOp
- TilePrefixCallback;
- typedef cub::BlockScan
- BlockScan;
-
- typedef core::uninitialized_array
- shared_items_t;
-
- union TempStorage
- {
- struct
- {
- typename BlockScan::TempStorage scan;
- typename TilePrefixCallback::TempStorage prefix;
- typename BlockDiscontinuityItems::TempStorage discontinuity;
- };
-
- typename BlockLoadItems::TempStorage load_items;
- shared_items_t shared_items;
-
- }; // union TempStorage
- }; // struct PtxPlan
-
- typedef typename core::specialize_plan_msvc10_war::type::type ptx_plan;
-
- typedef typename ptx_plan::ItemsLoadIt ItemsLoadIt;
- typedef typename ptx_plan::BlockLoadItems BlockLoadItems;
- typedef typename ptx_plan::BlockDiscontinuityItems BlockDiscontinuityItems;
- typedef typename ptx_plan::TilePrefixCallback TilePrefixCallback;
- typedef typename ptx_plan::BlockScan BlockScan;
- typedef typename ptx_plan::shared_items_t shared_items_t;
- typedef typename ptx_plan::TempStorage TempStorage;
-
- enum
- {
- BLOCK_THREADS = ptx_plan::BLOCK_THREADS,
- ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD,
- ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE
- };
-
- struct impl
- {
- //---------------------------------------------------------------------
- // Per-thread fields
- //---------------------------------------------------------------------
-
- TempStorage & temp_storage;
- ScanTileState & tile_state;
- ItemsLoadIt items_in;
- ItemsOutputIt items_out;
- cub::InequalityWrapper predicate;
- Size num_items;
-
- //---------------------------------------------------------------------
- // Utility functions
- //---------------------------------------------------------------------
-
- THRUST_DEVICE_FUNCTION
- shared_items_t &get_shared()
- {
- return temp_storage.shared_items;
- }
-
- void THRUST_DEVICE_FUNCTION
- scatter(item_type (&items)[ITEMS_PER_THREAD],
- Size (&selection_flags)[ITEMS_PER_THREAD],
- Size (&selection_indices)[ITEMS_PER_THREAD],
- int /*num_tile_items*/,
- int num_tile_selections,
- Size num_selections_prefix,
- Size /*num_selections*/)
- {
- using core::sync_threadblock;
-
-#pragma unroll
- for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
- {
- int local_scatter_offset = selection_indices[ITEM] -
- num_selections_prefix;
- if (selection_flags[ITEM])
- {
- get_shared()[local_scatter_offset] = items[ITEM];
- }
- }
-
- sync_threadblock();
-
- for (int item = threadIdx.x;
- item < num_tile_selections;
- item += BLOCK_THREADS)
- {
- items_out[num_selections_prefix + item] = get_shared()[item];
- }
-
- sync_threadblock();
- }
-
- //---------------------------------------------------------------------
- // Tile processing
- //---------------------------------------------------------------------
-
- template
- Size THRUST_DEVICE_FUNCTION
- consume_tile_impl(int num_tile_items,
- int tile_idx,
- Size tile_base)
- {
- using core::sync_threadblock;
- using core::uninitialized_array;
-
- item_type items_loc[ITEMS_PER_THREAD];
- Size selection_flags[ITEMS_PER_THREAD];
- Size selection_idx[ITEMS_PER_THREAD];
-
- if (IS_LAST_TILE)
- {
- BlockLoadItems(temp_storage.load_items)
- .Load(items_in + tile_base,
- items_loc,
- num_tile_items,
- *(items_in + tile_base));
- }
- else
- {
- BlockLoadItems(temp_storage.load_items)
- .Load(items_in + tile_base, items_loc);
- }
-
-
- sync_threadblock();
-
- if (IS_FIRST_TILE)
- {
- BlockDiscontinuityItems(temp_storage.discontinuity)
- .FlagHeads(selection_flags, items_loc, predicate);
- }
- else
- {
- item_type tile_predecessor = items_in[tile_base - 1];
- BlockDiscontinuityItems(temp_storage.discontinuity)
- .FlagHeads(selection_flags, items_loc, predicate, tile_predecessor);
- }
-
-#pragma unroll
- for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
- {
- // Set selection_flags for out-of-bounds items
- if ((IS_LAST_TILE) &&
- (Size(threadIdx.x * ITEMS_PER_THREAD) + ITEM >= num_tile_items))
- selection_flags[ITEM] = 1;
- }
-
- sync_threadblock();
-
- Size num_tile_selections = 0;
- Size num_selections = 0;
- Size num_selections_prefix = 0;
- if (IS_FIRST_TILE)
- {
- BlockScan(temp_storage.scan)
- .ExclusiveSum(selection_flags,
- selection_idx,
- num_tile_selections);
-
- if (threadIdx.x == 0)
- {
- // Update tile status if this is not the last tile
- if (!IS_LAST_TILE)
- tile_state.SetInclusive(0, num_tile_selections);
- }
-
- // Do not count any out-of-bounds selections
- if (IS_LAST_TILE)
- {
- int num_discount = ITEMS_PER_TILE - num_tile_items;
- num_tile_selections -= num_discount;
- }
- num_selections = num_tile_selections;
- }
- else
- {
- TilePrefixCallback prefix_cb(tile_state,
- temp_storage.prefix,
- cub::Sum(),
- tile_idx);
- BlockScan(temp_storage.scan)
- .ExclusiveSum(selection_flags,
- selection_idx,
- prefix_cb);
-
- num_selections = prefix_cb.GetInclusivePrefix();
- num_tile_selections = prefix_cb.GetBlockAggregate();
- num_selections_prefix = prefix_cb.GetExclusivePrefix();
-
- if (IS_LAST_TILE)
- {
- int num_discount = ITEMS_PER_TILE - num_tile_items;
- num_tile_selections -= num_discount;
- num_selections -= num_discount;
- }
- }
-
- sync_threadblock();
-
- scatter(items_loc,
- selection_flags,
- selection_idx,
- num_tile_items,
- num_tile_selections,
- num_selections_prefix,
- num_selections);
-
- return num_selections;
- }
-
-
- template
- Size THRUST_DEVICE_FUNCTION
- consume_tile(int num_tile_items,
- int tile_idx,
- Size tile_base)
- {
- if (tile_idx == 0)
- {
- return consume_tile_impl(num_tile_items,
- tile_idx,
- tile_base);
- }
- else
- {
- return consume_tile_impl(num_tile_items,
- tile_idx,
- tile_base);
- }
- }
-
- //---------------------------------------------------------------------
- // Constructor
- //---------------------------------------------------------------------
-
- THRUST_DEVICE_FUNCTION
- impl(TempStorage & temp_storage_,
- ScanTileState & tile_state_,
- ItemsLoadIt items_in_,
- ItemsOutputIt items_out_,
- BinaryPred binary_pred_,
- Size num_items_,
- int num_tiles,
- NumSelectedOutIt num_selected_out)
- : temp_storage(temp_storage_),
- tile_state(tile_state_),
- items_in(items_in_),
- items_out(items_out_),
- predicate(binary_pred_),
- num_items(num_items_)
- {
- int tile_idx = blockIdx.x;
- Size tile_base = tile_idx * ITEMS_PER_TILE;
-
- if (tile_idx < num_tiles - 1)
- {
- consume_tile(ITEMS_PER_TILE,
- tile_idx,
- tile_base);
- }
- else
- {
- int num_remaining = static_cast(num_items - tile_base);
- Size num_selections = consume_tile(num_remaining,
- tile_idx,
- tile_base);
- if (threadIdx.x == 0)
- {
- *num_selected_out = num_selections;
- }
- }
- }
- }; // struct impl
-
- //---------------------------------------------------------------------
- // Agent entry point
- //---------------------------------------------------------------------
-
- THRUST_AGENT_ENTRY(ItemsIt items_in,
- ItemsOutputIt items_out,
- BinaryPred binary_pred,
- NumSelectedOutIt num_selected_out,
- Size num_items,
- ScanTileState tile_state,
- int num_tiles,
- char * shmem)
- {
- TempStorage &storage = *reinterpret_cast(shmem);
-
- impl(storage,
- tile_state,
- core::make_load_iterator(ptx_plan(), items_in),
- items_out,
- binary_pred,
- num_items,
- num_tiles,
- num_selected_out);
- }
- }; // struct UniqueAgent
-
- template
- struct InitAgent
- {
- template
- struct PtxPlan : PtxPolicy<128> {};
- typedef core::specialize_plan ptx_plan;
-
- //---------------------------------------------------------------------
- // Agent entry point
- //---------------------------------------------------------------------
-
- THRUST_AGENT_ENTRY(ScanTileState tile_state,
- Size num_tiles,
- NumSelectedIt num_selected_out,
- char * /*shmem*/)
- {
- tile_state.InitializeStatus(num_tiles);
- if (blockIdx.x == 0 && threadIdx.x == 0)
- *num_selected_out = 0;
- }
-
- }; // struct InitAgent
-
- template
- static cudaError_t THRUST_RUNTIME_FUNCTION
- doit_step(void * d_temp_storage,
- size_t & temp_storage_bytes,
- ItemsInputIt items_in,
- ItemsOutputIt items_out,
- BinaryPred binary_pred,
- NumSelectedOutIt num_selected_out,
- Size num_items,
- cudaStream_t stream,
- bool debug_sync)
- {
- using core::AgentLauncher;
- using core::AgentPlan;
- using core::get_agent_plan;
-
- typedef AgentLauncher<
- UniqueAgent >
- unique_agent;
-
- typedef typename unique_agent::ScanTileState ScanTileState;
-
- typedef AgentLauncher<
- InitAgent >
- init_agent;
-
- using core::get_plan;
- typename get_plan::type init_plan = init_agent::get_plan();
- typename get_plan::type unique_plan = unique_agent::get_plan(stream);
-
-
- int tile_size = unique_plan.items_per_tile;
- size_t num_tiles = (num_items + tile_size - 1) / tile_size;
-
- size_t vshmem_size = core::vshmem_size(unique_plan.shared_memory_size,
- num_tiles);
-
- cudaError_t status = cudaSuccess;
- size_t allocation_sizes[2] = {0, vshmem_size};
- status = ScanTileState::AllocationSize(static_cast(num_tiles), allocation_sizes[0]);
- CUDA_CUB_RET_IF_FAIL(status);
-
- void *allocations[2] = {NULL, NULL};
- //
- status = cub::AliasTemporaries(d_temp_storage,
- temp_storage_bytes,
- allocations,
- allocation_sizes);
- CUDA_CUB_RET_IF_FAIL(status);
-
- if (d_temp_storage == NULL)
- {
- return status;
- }
-
- ScanTileState tile_status;
- status = tile_status.Init(static_cast(num_tiles), allocations[0], allocation_sizes[0]);
- CUDA_CUB_RET_IF_FAIL(status);
-
- num_tiles = max(1,num_tiles);
- init_agent ia(init_plan, num_tiles, stream, "unique_by_key::init_agent", debug_sync);
- ia.launch(tile_status, num_tiles, num_selected_out);
- CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
-
- if (num_items == 0) { return status; }
-
- char *vshmem_ptr = vshmem_size > 0 ? (char *)allocations[1] : NULL;
-
- unique_agent ua(unique_plan, num_items, stream, vshmem_ptr, "unique_by_key::unique_agent", debug_sync);
- ua.launch(items_in,
- items_out,
- binary_pred,
- num_selected_out,
- num_items,
- tile_status,
- num_tiles);
- CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
- return status;
- }
-
- template
- THRUST_RUNTIME_FUNCTION
- ItemsOutputIt unique(execution_policy& policy,
- ItemsInputIt items_first,
- ItemsInputIt items_last,
- ItemsOutputIt items_result,
- BinaryPred binary_pred)
- {
- // typedef typename iterator_traits::difference_type size_type;
- typedef int size_type;
-
- size_type num_items = static_cast(thrust::distance(items_first, items_last));
- size_t temp_storage_bytes = 0;
- cudaStream_t stream = cuda_cub::stream(policy);
- bool debug_sync = THRUST_DEBUG_SYNC_FLAG;
-
- cudaError_t status;
- status = doit_step(NULL,
- temp_storage_bytes,
- items_first,
- items_result,
- binary_pred,
- reinterpret_cast(NULL),
- num_items,
- stream,
- debug_sync);
- cuda_cub::throw_on_error(status, "unique: failed on 1st step");
-
- size_t allocation_sizes[2] = {sizeof(size_type), temp_storage_bytes};
- void * allocations[2] = {NULL, NULL};
-
- size_t storage_size = 0;
- status = core::alias_storage(NULL,
- storage_size,
- allocations,
- allocation_sizes);
- cuda_cub::throw_on_error(status, "unique: failed on 1st step");
-
- // Allocate temporary storage.
- thrust::detail::temporary_array
- tmp(policy, storage_size);
- void *ptr = static_cast(tmp.data().get());
-
- status = core::alias_storage(ptr,
- storage_size,
- allocations,
- allocation_sizes);
- cuda_cub::throw_on_error(status, "unique: failed on 2nd step");
-
- size_type* d_num_selected_out
- = thrust::detail::aligned_reinterpret_cast(allocations[0]);
-
- status = doit_step(allocations[1],
- temp_storage_bytes,
- items_first,
- items_result,
- binary_pred,
- d_num_selected_out,
- num_items,
- stream,
- debug_sync);
- cuda_cub::throw_on_error(status, "unique: failed on 2nd step");
-
- status = cuda_cub::synchronize(policy);
- cuda_cub::throw_on_error(status, "unique: failed to synchronize");
-
- size_type num_selected = get_value(policy, d_num_selected_out);
-
- return items_result + num_selected;
- }
-} // namespace __unique
-
-//-------------------------
-// Thrust API entry points
-//-------------------------
-
-__thrust_exec_check_disable__
-template
-OutputIt __host__ __device__
-unique_copy(execution_policy &policy,
- InputIt first,
- InputIt last,
- OutputIt result,
- BinaryPred binary_pred)
-{
- OutputIt ret = result;
- if (__THRUST_HAS_CUDART__)
- {
- ret = __unique::unique(policy,
- first,
- last,
- result,
- binary_pred);
- }
- else
- {
-#if !__THRUST_HAS_CUDART__
- ret = thrust::unique_copy(cvt_to_seq(derived_cast(policy)),
- first,
- last,
- result,
- binary_pred);
-#endif
- }
- return ret;
-}
-
-template
-OutputIt __host__ __device__
-unique_copy(execution_policy &policy,
- InputIt first,
- InputIt last,
- OutputIt result)
-{
- typedef typename iterator_traits::value_type input_type;
- return cuda_cub::unique_copy(policy, first, last, result, equal_to());
-}
-
-
-
-__thrust_exec_check_disable__
-template
-InputIt __host__ __device__
-unique(execution_policy &policy,
- InputIt first,
- InputIt last,
- BinaryPred binary_pred)
-{
- InputIt ret = first;
- if (__THRUST_HAS_CUDART__)
- {
- ret = cuda_cub::unique_copy(policy, first, last, first, binary_pred);
- }
- else
- {
-#if !__THRUST_HAS_CUDART__
- ret = thrust::unique(cvt_to_seq(derived_cast(policy)),
- first,
- last,
- binary_pred);
-#endif
- }
- return ret;
-}
-
-template
-InputIt __host__ __device__
-unique(execution_policy &policy,
- InputIt first,
- InputIt last)
-{
- typedef typename iterator_traits::value_type input_type;
- return cuda_cub::unique(policy, first, last, equal_to());
-}
-
-} // namespace cuda_cub
-} // end namespace thrust
-
-//
-#include
-#include
-#endif
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/reduce_by_key.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/reduce_by_key.h
deleted file mode 100644
index aaa5959a427f8b098085722d3821aa92d180ad97..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/reduce_by_key.h
+++ /dev/null
@@ -1,89 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template <typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator1,
-          typename OutputIterator2>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    reduce_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                  InputIterator1 keys_first,
-                  InputIterator1 keys_last,
-                  InputIterator2 values_first,
-                  OutputIterator1 keys_output,
-                  OutputIterator2 values_output);
-
-template <typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator1,
-          typename OutputIterator2,
-          typename BinaryPredicate>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    reduce_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                  InputIterator1 keys_first,
-                  InputIterator1 keys_last,
-                  InputIterator2 values_first,
-                  OutputIterator1 keys_output,
-                  OutputIterator2 values_output,
-                  BinaryPredicate binary_pred);
-
-template <typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator1,
-          typename OutputIterator2,
-          typename BinaryPredicate,
-          typename BinaryFunction>
-__host__ __device__
-  thrust::pair<OutputIterator1,OutputIterator2>
-    reduce_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                  InputIterator1 keys_first,
-                  InputIterator1 keys_last,
-                  InputIterator2 values_first,
-                  OutputIterator1 keys_output,
-                  OutputIterator2 values_output,
-                  BinaryPredicate binary_pred,
-                  BinaryFunction binary_op);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include
-
diff --git a/spaces/manhkhanhUIT/BOPBTL/Global/models/base_model.py b/spaces/manhkhanhUIT/BOPBTL/Global/models/base_model.py
deleted file mode 100644
index 4043116050e057f31099cda3ecae6ee3fa46cb2a..0000000000000000000000000000000000000000
--- a/spaces/manhkhanhUIT/BOPBTL/Global/models/base_model.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import os
-import torch
-import sys
-
-
-class BaseModel(torch.nn.Module):
- def name(self):
- return "BaseModel"
-
- def initialize(self, opt):
- self.opt = opt
- self.gpu_ids = opt.gpu_ids
- self.isTrain = opt.isTrain
- self.Tensor = torch.cuda.FloatTensor if self.gpu_ids else torch.Tensor
- self.save_dir = os.path.join(opt.checkpoints_dir, opt.name)
-
- def set_input(self, input):
- self.input = input
-
- def forward(self):
- pass
-
- # used in test time, no backprop
- def test(self):
- pass
-
- def get_image_paths(self):
- pass
-
- def optimize_parameters(self):
- pass
-
- def get_current_visuals(self):
- return self.input
-
- def get_current_errors(self):
- return {}
-
- def save(self, label):
- pass
-
- # helper saving function that can be used by subclasses
- def save_network(self, network, network_label, epoch_label, gpu_ids):
- save_filename = "%s_net_%s.pth" % (epoch_label, network_label)
- save_path = os.path.join(self.save_dir, save_filename)
- torch.save(network.cpu().state_dict(), save_path)
- if len(gpu_ids) and torch.cuda.is_available():
- network.cuda()
-
- def save_optimizer(self, optimizer, optimizer_label, epoch_label):
- save_filename = "%s_optimizer_%s.pth" % (epoch_label, optimizer_label)
- save_path = os.path.join(self.save_dir, save_filename)
- torch.save(optimizer.state_dict(), save_path)
-
- def load_optimizer(self, optimizer, optimizer_label, epoch_label, save_dir=""):
- save_filename = "%s_optimizer_%s.pth" % (epoch_label, optimizer_label)
- if not save_dir:
- save_dir = self.save_dir
- save_path = os.path.join(save_dir, save_filename)
-
- if not os.path.isfile(save_path):
- print("%s not exists yet!" % save_path)
- else:
- optimizer.load_state_dict(torch.load(save_path))
-
- # helper loading function that can be used by subclasses
- def load_network(self, network, network_label, epoch_label, save_dir=""):
- save_filename = "%s_net_%s.pth" % (epoch_label, network_label)
- if not save_dir:
- save_dir = self.save_dir
-
- # print(save_dir)
- # print(self.save_dir)
- save_path = os.path.join(save_dir, save_filename)
- if not os.path.isfile(save_path):
- print("%s not exists yet!" % save_path)
- # if network_label == 'G':
- # raise('Generator must exist!')
- else:
- # network.load_state_dict(torch.load(save_path))
- try:
- # print(save_path)
- network.load_state_dict(torch.load(save_path))
- except:
- pretrained_dict = torch.load(save_path)
- model_dict = network.state_dict()
- try:
- pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
- network.load_state_dict(pretrained_dict)
- # if self.opt.verbose:
- print(
- "Pretrained network %s has excessive layers; Only loading layers that are used"
- % network_label
- )
- except:
- print(
- "Pretrained network %s has fewer layers; The following are not initialized:"
- % network_label
- )
- for k, v in pretrained_dict.items():
- if v.size() == model_dict[k].size():
- model_dict[k] = v
-
- if sys.version_info >= (3, 0):
- not_initialized = set()
- else:
- from sets import Set
-
- not_initialized = Set()
-
- for k, v in model_dict.items():
- if k not in pretrained_dict or v.size() != pretrained_dict[k].size():
- not_initialized.add(k.split(".")[0])
-
- print(sorted(not_initialized))
- network.load_state_dict(model_dict)
-
-    def update_learning_rate(self):
- pass
diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/architecture.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/architecture.py
deleted file mode 100644
index 91eb91c8c9fd6500d191456bb3dd8b39d491bb5a..0000000000000000000000000000000000000000
--- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/architecture.py
+++ /dev/null
@@ -1,173 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-import torch.nn.utils.spectral_norm as spectral_norm
-from models.networks.normalization import SPADE
-
-
-# ResNet block that uses SPADE.
-# It differs from the ResNet block of pix2pixHD in that
-# it takes in the segmentation map as input, learns the skip connection if necessary,
-# and applies normalization first and then convolution.
-# This architecture seemed like a standard architecture for unconditional or
-# class-conditional GAN architecture using residual block.
-# The code was inspired from https://github.com/LMescheder/GAN_stability.
-class SPADEResnetBlock(nn.Module):
- def __init__(self, fin, fout, opt):
- super().__init__()
- # Attributes
- self.learned_shortcut = fin != fout
- fmiddle = min(fin, fout)
-
- self.opt = opt
- # create conv layers
- self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=1)
- self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=1)
- if self.learned_shortcut:
- self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False)
-
- # apply spectral norm if specified
- if "spectral" in opt.norm_G:
- self.conv_0 = spectral_norm(self.conv_0)
- self.conv_1 = spectral_norm(self.conv_1)
- if self.learned_shortcut:
- self.conv_s = spectral_norm(self.conv_s)
-
- # define normalization layers
- spade_config_str = opt.norm_G.replace("spectral", "")
- self.norm_0 = SPADE(spade_config_str, fin, opt.semantic_nc, opt)
- self.norm_1 = SPADE(spade_config_str, fmiddle, opt.semantic_nc, opt)
- if self.learned_shortcut:
- self.norm_s = SPADE(spade_config_str, fin, opt.semantic_nc, opt)
-
- # note the resnet block with SPADE also takes in |seg|,
- # the semantic segmentation map as input
- def forward(self, x, seg, degraded_image):
- x_s = self.shortcut(x, seg, degraded_image)
-
- dx = self.conv_0(self.actvn(self.norm_0(x, seg, degraded_image)))
- dx = self.conv_1(self.actvn(self.norm_1(dx, seg, degraded_image)))
-
- out = x_s + dx
-
- return out
-
- def shortcut(self, x, seg, degraded_image):
- if self.learned_shortcut:
- x_s = self.conv_s(self.norm_s(x, seg, degraded_image))
- else:
- x_s = x
- return x_s
-
- def actvn(self, x):
- return F.leaky_relu(x, 2e-1)
-
-
-# ResNet block used in pix2pixHD
-# We keep the same architecture as pix2pixHD.
-class ResnetBlock(nn.Module):
- def __init__(self, dim, norm_layer, activation=nn.ReLU(False), kernel_size=3):
- super().__init__()
-
- pw = (kernel_size - 1) // 2
- self.conv_block = nn.Sequential(
- nn.ReflectionPad2d(pw),
- norm_layer(nn.Conv2d(dim, dim, kernel_size=kernel_size)),
- activation,
- nn.ReflectionPad2d(pw),
- norm_layer(nn.Conv2d(dim, dim, kernel_size=kernel_size)),
- )
-
- def forward(self, x):
- y = self.conv_block(x)
- out = x + y
- return out
-
-
-# VGG architecture, used for the perceptual loss with a pretrained VGG network
-class VGG19(torch.nn.Module):
- def __init__(self, requires_grad=False):
- super().__init__()
- vgg_pretrained_features = torchvision.models.vgg19(pretrained=True).features
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- for x in range(2):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(2, 7):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(7, 12):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(12, 21):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- for x in range(21, 30):
- self.slice5.add_module(str(x), vgg_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, X):
- h_relu1 = self.slice1(X)
- h_relu2 = self.slice2(h_relu1)
- h_relu3 = self.slice3(h_relu2)
- h_relu4 = self.slice4(h_relu3)
- h_relu5 = self.slice5(h_relu4)
- out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5]
- return out
-
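The five relu slices returned above are typically combined into a perceptual loss by comparing activations of a generated and a real image at each slice. The weights below are the ones commonly used with pix2pixHD-style models and are an assumption here, not something taken from this file.

import torch.nn as nn

def perceptual_loss(vgg, fake, real):
    # vgg is an instance of the VGG19 wrapper above; it returns five feature maps
    weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0]
    feats_fake, feats_real = vgg(fake), vgg(real)
    criterion = nn.L1Loss()
    # weighted L1 distance between feature maps, with deeper layers weighted more
    return sum(w * criterion(f, r.detach())
               for w, f, r in zip(weights, feats_fake, feats_real))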
-
-class SPADEResnetBlock_non_spade(nn.Module):
- def __init__(self, fin, fout, opt):
- super().__init__()
- # Attributes
- self.learned_shortcut = fin != fout
- fmiddle = min(fin, fout)
-
- self.opt = opt
- # create conv layers
- self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=1)
- self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=1)
- if self.learned_shortcut:
- self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False)
-
- # apply spectral norm if specified
- if "spectral" in opt.norm_G:
- self.conv_0 = spectral_norm(self.conv_0)
- self.conv_1 = spectral_norm(self.conv_1)
- if self.learned_shortcut:
- self.conv_s = spectral_norm(self.conv_s)
-
- # define normalization layers
- spade_config_str = opt.norm_G.replace("spectral", "")
- self.norm_0 = SPADE(spade_config_str, fin, opt.semantic_nc, opt)
- self.norm_1 = SPADE(spade_config_str, fmiddle, opt.semantic_nc, opt)
- if self.learned_shortcut:
- self.norm_s = SPADE(spade_config_str, fin, opt.semantic_nc, opt)
-
-    # this variant keeps the same (x, seg, degraded_image) signature for compatibility,
-    # but ignores seg and degraded_image and applies no SPADE normalization in the residual path
- def forward(self, x, seg, degraded_image):
- x_s = self.shortcut(x, seg, degraded_image)
-
- dx = self.conv_0(self.actvn(x))
- dx = self.conv_1(self.actvn(dx))
-
- out = x_s + dx
-
- return out
-
- def shortcut(self, x, seg, degraded_image):
- if self.learned_shortcut:
- x_s = self.conv_s(x)
- else:
- x_s = x
- return x_s
-
- def actvn(self, x):
- return F.leaky_relu(x, 2e-1)
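A rough usage sketch for the SPADE residual blocks in this file. The opt namespace below is hypothetical and only carries the two fields the blocks read directly (norm_G and semantic_nc); the real options object has many more fields and the SPADE layer may consume additional ones, so treat this as an illustration of the calling convention rather than a drop-in test.

import argparse
import torch

opt = argparse.Namespace(norm_G="spadesyncbatch3x3", semantic_nc=18)  # hypothetical minimal opt

block = SPADEResnetBlock(fin=64, fout=32, opt=opt)   # fin != fout, so a learned shortcut is created

x = torch.randn(2, 64, 32, 32)          # feature map
seg = torch.randn(2, 18, 32, 32)        # semantic segmentation map, one channel per class
degraded = torch.randn(2, 3, 32, 32)    # degraded input image consumed by SPADE

out = block(x, seg, degraded)           # shape (2, 32, 32, 32)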
diff --git a/spaces/matthoffner/chatbot-mini/types/data.ts b/spaces/matthoffner/chatbot-mini/types/data.ts
deleted file mode 100644
index d57323721fbbf2ead31fcc33334717d75de1f3f6..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/chatbot-mini/types/data.ts
+++ /dev/null
@@ -1,4 +0,0 @@
-export interface KeyValuePair {
- key: string;
- value: any;
-}
diff --git a/spaces/maxmax20160403/sovits5.0/vits_pretrain/README.md b/spaces/maxmax20160403/sovits5.0/vits_pretrain/README.md
deleted file mode 100644
index 5e13d08f9a9c3fbc1d21e06e760abb0dce647ef0..0000000000000000000000000000000000000000
--- a/spaces/maxmax20160403/sovits5.0/vits_pretrain/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-Path for:
-
- vits_pretrain.pt
\ No newline at end of file
diff --git a/spaces/merve/my_own_oasst_falcon/Dockerfile b/spaces/merve/my_own_oasst_falcon/Dockerfile
deleted file mode 100644
index 1f185cc85fa318fdf39f91be98db2bb7e805411c..0000000000000000000000000000000000000000
--- a/spaces/merve/my_own_oasst_falcon/Dockerfile
+++ /dev/null
@@ -1,121 +0,0 @@
-ARG MODEL_NAME
-ARG MODEL_PARAMS
-ARG APP_COLOR
-ARG APP_NAME
-
-
-FROM node:19 as chatui-builder
-ARG MODEL_NAME
-ARG MODEL_PARAMS
-ARG APP_COLOR
-ARG APP_NAME
-
-WORKDIR /app
-
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- git gettext && \
- rm -rf /var/lib/apt/lists/*
-
-
-RUN git clone https://github.com/huggingface/chat-ui.git
-
-WORKDIR /app/chat-ui
-
-
-COPY .env.local.template .env.local.template
-
-RUN mkdir defaults
-ADD defaults /defaults
-RUN chmod -R 777 /defaults
-RUN --mount=type=secret,id=MONGODB_URL,mode=0444 \
- MODEL_NAME="${MODEL_NAME:="$(cat /defaults/MODEL_NAME)"}" && export MODEL_NAME \
- && MODEL_PARAMS="${MODEL_PARAMS:="$(cat /defaults/MODEL_PARAMS)"}" && export MODEL_PARAMS \
- && APP_COLOR="${APP_COLOR:="$(cat /defaults/APP_COLOR)"}" && export APP_COLOR \
- && APP_NAME="${APP_NAME:="$(cat /defaults/APP_NAME)"}" && export APP_NAME \
-    && MONGODB_URL=$(grep '^' /run/secrets/MONGODB_URL || cat /defaults/MONGODB_URL) && export MONGODB_URL && \
- echo "${MONGODB_URL}" && \
- envsubst < ".env.local.template" > ".env.local" \
- && rm .env.local.template
-
-
-
-RUN --mount=type=cache,target=/app/.npm \
- npm set cache /app/.npm && \
- npm ci
-
-RUN npm run build
-
-FROM ghcr.io/huggingface/text-generation-inference:latest
-
-ARG MODEL_NAME
-ARG MODEL_PARAMS
-ARG APP_COLOR
-ARG APP_NAME
-
-ENV TZ=Europe/Paris \
- PORT=3000
-
-
-
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- gnupg \
- curl \
- gettext && \
- rm -rf /var/lib/apt/lists/*
-COPY entrypoint.sh.template entrypoint.sh.template
-
-RUN mkdir defaults
-ADD defaults /defaults
-RUN chmod -R 777 /defaults
-
-RUN --mount=type=secret,id=MONGODB_URL,mode=0444 \
- MODEL_NAME="${MODEL_NAME:="$(cat /defaults/MODEL_NAME)"}" && export MODEL_NAME \
- && MODEL_PARAMS="${MODEL_PARAMS:="$(cat /defaults/MODEL_PARAMS)"}" && export MODEL_PARAMS \
- && APP_COLOR="${APP_COLOR:="$(cat /defaults/APP_COLOR)"}" && export APP_COLOR \
- && APP_NAME="${APP_NAME:="$(cat /defaults/APP_NAME)"}" && export APP_NAME \
-    && MONGODB_URL=$(grep '^' /run/secrets/MONGODB_URL || cat /defaults/MONGODB_URL) && export MONGODB_URL && \
- envsubst < "entrypoint.sh.template" > "entrypoint.sh" \
- && rm entrypoint.sh.template
-
-
-RUN curl -fsSL https://pgp.mongodb.com/server-6.0.asc | \
- gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg \
- --dearmor
-
-RUN echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-6.0.list
-
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- mongodb-org && \
- rm -rf /var/lib/apt/lists/*
-
-RUN mkdir -p /data/db
-RUN chown -R 1000:1000 /data
-
-RUN curl -fsSL https://deb.nodesource.com/setup_19.x | /bin/bash -
-
-RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
- nodejs && \
- rm -rf /var/lib/apt/lists/*
-
-RUN mkdir /app
-RUN chown -R 1000:1000 /app
-
-RUN useradd -m -u 1000 user
-
-# Switch to the "user" user
-USER user
-
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-RUN npm config set prefix /home/user/.local
-RUN npm install -g pm2
-
-COPY --from=chatui-builder --chown=1000 /app/chat-ui/node_modules /app/node_modules
-COPY --from=chatui-builder --chown=1000 /app/chat-ui/package.json /app/package.json
-COPY --from=chatui-builder --chown=1000 /app/chat-ui/build /app/build
-
-ENTRYPOINT ["/bin/bash"]
-CMD ["entrypoint.sh"]
-
-
diff --git a/spaces/misza222/extractframe/app.py b/spaces/misza222/extractframe/app.py
deleted file mode 100644
index 7b9dd07a4c05c49ae6d6a5feae7f272c9a268700..0000000000000000000000000000000000000000
--- a/spaces/misza222/extractframe/app.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import gradio as gr
-
-import cv2
-import PIL
-
-def get_frame_id_from_pct(v, pct):
- return int(v.get(cv2.CAP_PROP_FRAME_COUNT) * pct/100)
-
-def frame2pil(frame):
-    # OpenCV returns frames in BGR order, so convert before building the PIL image
-    return PIL.Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
-
-def extract_frame(pct, video):
- v = cv2.VideoCapture(video)
- frame_at = get_frame_id_from_pct(v, pct)
- v.set(cv2.CAP_PROP_POS_FRAMES, frame_at)
- ret, frame = v.read()
- return frame2pil(frame)
-
-def process_video(pct, video):
-
- return extract_frame(pct, video)
-
-app = gr.Interface(fn=process_video,
- inputs=[gr.Number(label="PCT", ) ,gr.PlayableVideo()]
- , outputs='image')
-app.launch()
\ No newline at end of file
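For reference, a quick sketch of the percentage-to-frame mapping used by extract_frame above: for a clip with 3000 frames, pct=50 seeks to frame 1500 before grabbing a single frame. The file name is hypothetical.

import cv2

video = cv2.VideoCapture("clip.mp4")              # hypothetical input file
total = video.get(cv2.CAP_PROP_FRAME_COUNT)       # e.g. 3000.0
frame_id = int(total * 50 / 100)                  # 1500 for pct=50
video.set(cv2.CAP_PROP_POS_FRAMES, frame_id)
ok, frame = video.read()                          # frame comes back in BGR order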
diff --git a/spaces/moflo/keras_stylegan/app.py b/spaces/moflo/keras_stylegan/app.py
deleted file mode 100644
index 31dcb0b8b1d43d6eff8768fc4c9872f051957bce..0000000000000000000000000000000000000000
--- a/spaces/moflo/keras_stylegan/app.py
+++ /dev/null
@@ -1,621 +0,0 @@
-import sys
-from subprocess import call
-def run_cmd(command):
- try:
- print(command)
- call(command, shell=True)
- except KeyboardInterrupt:
- print("Process interrupted")
- sys.exit(1)
-
-print("⬇️ Installing gradio==2.4.7b9")
-run_cmd("pip install --upgrade pip")
-run_cmd('pip install gradio==2.4.7b9')
-
-import gradio as gr
-import os
-import random
-import math
-import numpy as np
-import matplotlib.pyplot as plt
-
-from enum import Enum
-from glob import glob
-from functools import partial
-
-import tensorflow as tf
-from tensorflow import keras
-from tensorflow.keras import layers
-from tensorflow.keras.models import Sequential
-from tensorflow_addons.layers import InstanceNormalization
-
-import tensorflow_datasets as tfds
-
-# Model Definition
-
-def log2(x):
- return int(np.log2(x))
-
-
-def resize_image(res, sample):
- print("Call resize_image...")
- image = sample["image"]
-    # only downsampling, so use nearest neighbor, which is faster to run
- image = tf.image.resize(
- image, (res, res), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR
- )
- image = tf.cast(image, tf.float32) / 127.5 - 1.0
- return image
-
-
-def create_dataloader(res):
- batch_size = batch_sizes[log2(res)]
- dl = ds_train.map(partial(resize_image, res), num_parallel_calls=tf.data.AUTOTUNE)
- dl = dl.shuffle(200).batch(batch_size, drop_remainder=True).prefetch(1).repeat()
- return dl
-
-def fade_in(alpha, a, b):
- return alpha * a + (1.0 - alpha) * b
-
-
-def wasserstein_loss(y_true, y_pred):
- return -tf.reduce_mean(y_true * y_pred)
-
-
-def pixel_norm(x, epsilon=1e-8):
- return x / tf.math.sqrt(tf.reduce_mean(x ** 2, axis=-1, keepdims=True) + epsilon)
-
-
-def minibatch_std(input_tensor, epsilon=1e-8):
- n, h, w, c = tf.shape(input_tensor)
- group_size = tf.minimum(4, n)
- x = tf.reshape(input_tensor, [group_size, -1, h, w, c])
- group_mean, group_var = tf.nn.moments(x, axes=(0), keepdims=False)
- group_std = tf.sqrt(group_var + epsilon)
- avg_std = tf.reduce_mean(group_std, axis=[1, 2, 3], keepdims=True)
- x = tf.tile(avg_std, [group_size, h, w, 1])
- return tf.concat([input_tensor, x], axis=-1)
-
-
-class EqualizedConv(layers.Layer):
- def __init__(self, out_channels, kernel=3, gain=2, **kwargs):
- super(EqualizedConv, self).__init__(**kwargs)
- self.kernel = kernel
- self.out_channels = out_channels
- self.gain = gain
- self.pad = kernel != 1
-
- def build(self, input_shape):
- self.in_channels = input_shape[-1]
- initializer = keras.initializers.RandomNormal(mean=0.0, stddev=1.0)
- self.w = self.add_weight(
- shape=[self.kernel, self.kernel, self.in_channels, self.out_channels],
- initializer=initializer,
- trainable=True,
- name="kernel",
- )
- self.b = self.add_weight(
- shape=(self.out_channels,), initializer="zeros", trainable=True, name="bias"
- )
- fan_in = self.kernel * self.kernel * self.in_channels
- self.scale = tf.sqrt(self.gain / fan_in)
-
- def call(self, inputs):
- if self.pad:
- x = tf.pad(inputs, [[0, 0], [1, 1], [1, 1], [0, 0]], mode="REFLECT")
- else:
- x = inputs
- output = (
- tf.nn.conv2d(x, self.scale * self.w, strides=1, padding="VALID") + self.b
- )
- return output
-
-
-class EqualizedDense(layers.Layer):
- def __init__(self, units, gain=2, learning_rate_multiplier=1, **kwargs):
- super(EqualizedDense, self).__init__(**kwargs)
- self.units = units
- self.gain = gain
- self.learning_rate_multiplier = learning_rate_multiplier
-
- def build(self, input_shape):
- self.in_channels = input_shape[-1]
- initializer = keras.initializers.RandomNormal(
- mean=0.0, stddev=1.0 / self.learning_rate_multiplier
- )
- self.w = self.add_weight(
- shape=[self.in_channels, self.units],
- initializer=initializer,
- trainable=True,
- name="kernel",
- )
- self.b = self.add_weight(
- shape=(self.units,), initializer="zeros", trainable=True, name="bias"
- )
- fan_in = self.in_channels
- self.scale = tf.sqrt(self.gain / fan_in)
-
- def call(self, inputs):
- output = tf.add(tf.matmul(inputs, self.scale * self.w), self.b)
- return output * self.learning_rate_multiplier
-
-
-class AddNoise(layers.Layer):
- def build(self, input_shape):
- n, h, w, c = input_shape[0]
- initializer = keras.initializers.RandomNormal(mean=0.0, stddev=1.0)
- self.b = self.add_weight(
- shape=[1, 1, 1, c], initializer=initializer, trainable=True, name="kernel"
- )
-
- def call(self, inputs):
- x, noise = inputs
- output = x + self.b * noise
- return output
-
-
-class AdaIN(layers.Layer):
- def __init__(self, gain=1, **kwargs):
- super(AdaIN, self).__init__(**kwargs)
- self.gain = gain
-
- def build(self, input_shapes):
- x_shape = input_shapes[0]
- w_shape = input_shapes[1]
-
- self.w_channels = w_shape[-1]
- self.x_channels = x_shape[-1]
-
- self.dense_1 = EqualizedDense(self.x_channels, gain=1)
- self.dense_2 = EqualizedDense(self.x_channels, gain=1)
-
- def call(self, inputs):
- x, w = inputs
- ys = tf.reshape(self.dense_1(w), (-1, 1, 1, self.x_channels))
- yb = tf.reshape(self.dense_2(w), (-1, 1, 1, self.x_channels))
- return ys * x + yb
-
-def Mapping(num_stages, input_shape=512):
- z = layers.Input(shape=(input_shape))
- w = pixel_norm(z)
- for i in range(8):
- w = EqualizedDense(512, learning_rate_multiplier=0.01)(w)
- w = layers.LeakyReLU(0.2)(w)
- w = tf.tile(tf.expand_dims(w, 1), (1, num_stages, 1))
- return keras.Model(z, w, name="mapping")
-
-
-class Generator:
- def __init__(self, start_res_log2, target_res_log2):
- self.start_res_log2 = start_res_log2
- self.target_res_log2 = target_res_log2
- self.num_stages = target_res_log2 - start_res_log2 + 1
- # list of generator blocks at increasing resolution
- self.g_blocks = []
- # list of layers to convert g_block activation to RGB
- self.to_rgb = []
-        # list of noise inputs at different resolutions fed into the g_blocks
-        self.noise_inputs = []
-        # number of filters to use at each stage, keyed by log2(resolution)
- self.filter_nums = {
- 0: 512,
- 1: 512,
- 2: 512, # 4x4
- 3: 512, # 8x8
- 4: 512, # 16x16
- 5: 512, # 32x32
- 6: 256, # 64x64
- 7: 128, # 128x128
- 8: 64, # 256x256
- 9: 32, # 512x512
- 10: 16,
- } # 1024x1024
-
- start_res = 2 ** start_res_log2
- self.input_shape = (start_res, start_res, self.filter_nums[start_res_log2])
- self.g_input = layers.Input(self.input_shape, name="generator_input")
-
- for i in range(start_res_log2, target_res_log2 + 1):
- filter_num = self.filter_nums[i]
- res = 2 ** i
- self.noise_inputs.append(
- layers.Input(shape=(res, res, 1), name=f"noise_{res}x{res}")
- )
- to_rgb = Sequential(
- [
- layers.InputLayer(input_shape=(res, res, filter_num)),
- EqualizedConv(3, 1, gain=1),
- ],
- name=f"to_rgb_{res}x{res}",
- )
- self.to_rgb.append(to_rgb)
- is_base = i == self.start_res_log2
- if is_base:
- input_shape = (res, res, self.filter_nums[i - 1])
- else:
- input_shape = (2 ** (i - 1), 2 ** (i - 1), self.filter_nums[i - 1])
- g_block = self.build_block(
- filter_num, res=res, input_shape=input_shape, is_base=is_base
- )
- self.g_blocks.append(g_block)
-
- def build_block(self, filter_num, res, input_shape, is_base):
- input_tensor = layers.Input(shape=input_shape, name=f"g_{res}")
- noise = layers.Input(shape=(res, res, 1), name=f"noise_{res}")
- w = layers.Input(shape=512)
- x = input_tensor
-
- if not is_base:
- x = layers.UpSampling2D((2, 2))(x)
- x = EqualizedConv(filter_num, 3)(x)
-
- x = AddNoise()([x, noise])
- x = layers.LeakyReLU(0.2)(x)
- x = InstanceNormalization()(x)
- x = AdaIN()([x, w])
-
- x = EqualizedConv(filter_num, 3)(x)
- x = AddNoise()([x, noise])
- x = layers.LeakyReLU(0.2)(x)
- x = InstanceNormalization()(x)
- x = AdaIN()([x, w])
- return keras.Model([input_tensor, w, noise], x, name=f"genblock_{res}x{res}")
-
- def grow(self, res_log2):
- res = 2 ** res_log2
-
- num_stages = res_log2 - self.start_res_log2 + 1
- w = layers.Input(shape=(self.num_stages, 512), name="w")
-
- alpha = layers.Input(shape=(1), name="g_alpha")
- x = self.g_blocks[0]([self.g_input, w[:, 0], self.noise_inputs[0]])
-
- if num_stages == 1:
- rgb = self.to_rgb[0](x)
- else:
- for i in range(1, num_stages - 1):
-
- x = self.g_blocks[i]([x, w[:, i], self.noise_inputs[i]])
-
- old_rgb = self.to_rgb[num_stages - 2](x)
- old_rgb = layers.UpSampling2D((2, 2))(old_rgb)
-
- i = num_stages - 1
- x = self.g_blocks[i]([x, w[:, i], self.noise_inputs[i]])
-
- new_rgb = self.to_rgb[i](x)
-
- rgb = fade_in(alpha[0], new_rgb, old_rgb)
-
- return keras.Model(
- [self.g_input, w, self.noise_inputs, alpha],
- rgb,
- name=f"generator_{res}_x_{res}",
- )
-
-
-class Discriminator:
- def __init__(self, start_res_log2, target_res_log2):
- self.start_res_log2 = start_res_log2
- self.target_res_log2 = target_res_log2
- self.num_stages = target_res_log2 - start_res_log2 + 1
-        # number of filters to use at each stage, keyed by log2(resolution)
- self.filter_nums = {
- 0: 512,
- 1: 512,
- 2: 512, # 4x4
- 3: 512, # 8x8
- 4: 512, # 16x16
- 5: 512, # 32x32
- 6: 256, # 64x64
- 7: 128, # 128x128
- 8: 64, # 256x256
- 9: 32, # 512x512
- 10: 16,
- } # 1024x1024
- # list of discriminator blocks at increasing resolution
- self.d_blocks = []
- # list of layers to convert RGB into activation for d_blocks inputs
- self.from_rgb = []
-
- for res_log2 in range(self.start_res_log2, self.target_res_log2 + 1):
- res = 2 ** res_log2
- filter_num = self.filter_nums[res_log2]
- from_rgb = Sequential(
- [
- layers.InputLayer(
- input_shape=(res, res, 3), name=f"from_rgb_input_{res}"
- ),
- EqualizedConv(filter_num, 1),
- layers.LeakyReLU(0.2),
- ],
- name=f"from_rgb_{res}",
- )
-
- self.from_rgb.append(from_rgb)
-
- input_shape = (res, res, filter_num)
- if len(self.d_blocks) == 0:
- d_block = self.build_base(filter_num, res)
- else:
- d_block = self.build_block(
- filter_num, self.filter_nums[res_log2 - 1], res
- )
-
- self.d_blocks.append(d_block)
-
- def build_base(self, filter_num, res):
- input_tensor = layers.Input(shape=(res, res, filter_num), name=f"d_{res}")
- x = minibatch_std(input_tensor)
- x = EqualizedConv(filter_num, 3)(x)
- x = layers.LeakyReLU(0.2)(x)
- x = layers.Flatten()(x)
- x = EqualizedDense(filter_num)(x)
- x = layers.LeakyReLU(0.2)(x)
- x = EqualizedDense(1)(x)
- return keras.Model(input_tensor, x, name=f"d_{res}")
-
- def build_block(self, filter_num_1, filter_num_2, res):
- input_tensor = layers.Input(shape=(res, res, filter_num_1), name=f"d_{res}")
- x = EqualizedConv(filter_num_1, 3)(input_tensor)
- x = layers.LeakyReLU(0.2)(x)
- x = EqualizedConv(filter_num_2)(x)
- x = layers.LeakyReLU(0.2)(x)
- x = layers.AveragePooling2D((2, 2))(x)
- return keras.Model(input_tensor, x, name=f"d_{res}")
-
- def grow(self, res_log2):
- res = 2 ** res_log2
- idx = res_log2 - self.start_res_log2
- alpha = layers.Input(shape=(1), name="d_alpha")
- input_image = layers.Input(shape=(res, res, 3), name="input_image")
- x = self.from_rgb[idx](input_image)
- x = self.d_blocks[idx](x)
- if idx > 0:
- idx -= 1
- downsized_image = layers.AveragePooling2D((2, 2))(input_image)
- y = self.from_rgb[idx](downsized_image)
- x = fade_in(alpha[0], x, y)
-
- for i in range(idx, -1, -1):
- x = self.d_blocks[i](x)
- return keras.Model([input_image, alpha], x, name=f"discriminator_{res}_x_{res}")
-
-class StyleGAN(tf.keras.Model):
- def __init__(self, z_dim=512, target_res=64, start_res=4):
- super(StyleGAN, self).__init__()
- self.z_dim = z_dim
-
- self.target_res_log2 = log2(target_res)
- self.start_res_log2 = log2(start_res)
- self.current_res_log2 = self.target_res_log2
- self.num_stages = self.target_res_log2 - self.start_res_log2 + 1
-
- self.alpha = tf.Variable(1.0, dtype=tf.float32, trainable=False, name="alpha")
-
- self.mapping = Mapping(num_stages=self.num_stages)
- self.d_builder = Discriminator(self.start_res_log2, self.target_res_log2)
- self.g_builder = Generator(self.start_res_log2, self.target_res_log2)
- self.g_input_shape = self.g_builder.input_shape
-
- self.phase = None
- self.train_step_counter = tf.Variable(0, dtype=tf.int32, trainable=False)
-
- self.loss_weights = {"gradient_penalty": 10, "drift": 0.001}
-
- def grow_model(self, res):
- tf.keras.backend.clear_session()
- res_log2 = log2(res)
- self.generator = self.g_builder.grow(res_log2)
- self.discriminator = self.d_builder.grow(res_log2)
- self.current_res_log2 = res_log2
- print(f"\nModel resolution:{res}x{res}")
-
- def compile(
- self, steps_per_epoch, phase, res, d_optimizer, g_optimizer, *args, **kwargs
- ):
- self.loss_weights = kwargs.pop("loss_weights", self.loss_weights)
- self.steps_per_epoch = steps_per_epoch
- if res != 2 ** self.current_res_log2:
- self.grow_model(res)
- self.d_optimizer = d_optimizer
- self.g_optimizer = g_optimizer
-
- self.train_step_counter.assign(0)
- self.phase = phase
- self.d_loss_metric = keras.metrics.Mean(name="d_loss")
- self.g_loss_metric = keras.metrics.Mean(name="g_loss")
- super(StyleGAN, self).compile(*args, **kwargs)
-
- @property
- def metrics(self):
- return [self.d_loss_metric, self.g_loss_metric]
-
- def generate_noise(self, batch_size):
- noise = [
- tf.random.normal((batch_size, 2 ** res, 2 ** res, 1))
- for res in range(self.start_res_log2, self.target_res_log2 + 1)
- ]
- return noise
-
- def gradient_loss(self, grad):
- loss = tf.square(grad)
- loss = tf.reduce_sum(loss, axis=tf.range(1, tf.size(tf.shape(loss))))
- loss = tf.sqrt(loss)
- loss = tf.reduce_mean(tf.square(loss - 1))
- return loss
-
- def train_step(self, real_images):
-
- self.train_step_counter.assign_add(1)
-
- if self.phase == "TRANSITION":
- self.alpha.assign(
- tf.cast(self.train_step_counter / self.steps_per_epoch, tf.float32)
- )
- elif self.phase == "STABLE":
- self.alpha.assign(1.0)
- else:
- raise NotImplementedError
- alpha = tf.expand_dims(self.alpha, 0)
- batch_size = tf.shape(real_images)[0]
- real_labels = tf.ones(batch_size)
- fake_labels = -tf.ones(batch_size)
-
- z = tf.random.normal((batch_size, self.z_dim))
- const_input = tf.ones(tuple([batch_size] + list(self.g_input_shape)))
- noise = self.generate_noise(batch_size)
-
- # generator
- with tf.GradientTape() as g_tape:
- w = self.mapping(z)
- fake_images = self.generator([const_input, w, noise, alpha])
- pred_fake = self.discriminator([fake_images, alpha])
- g_loss = wasserstein_loss(real_labels, pred_fake)
-
- trainable_weights = (
- self.mapping.trainable_weights + self.generator.trainable_weights
- )
- gradients = g_tape.gradient(g_loss, trainable_weights)
- self.g_optimizer.apply_gradients(zip(gradients, trainable_weights))
-
- # discriminator
- with tf.GradientTape() as gradient_tape, tf.GradientTape() as total_tape:
- # forward pass
- pred_fake = self.discriminator([fake_images, alpha])
- pred_real = self.discriminator([real_images, alpha])
-
- epsilon = tf.random.uniform((batch_size, 1, 1, 1))
- interpolates = epsilon * real_images + (1 - epsilon) * fake_images
- gradient_tape.watch(interpolates)
- pred_fake_grad = self.discriminator([interpolates, alpha])
-
- # calculate losses
- loss_fake = wasserstein_loss(fake_labels, pred_fake)
- loss_real = wasserstein_loss(real_labels, pred_real)
- loss_fake_grad = wasserstein_loss(fake_labels, pred_fake_grad)
-
- # gradient penalty
- gradients_fake = gradient_tape.gradient(loss_fake_grad, [interpolates])
- gradient_penalty = self.loss_weights[
- "gradient_penalty"
- ] * self.gradient_loss(gradients_fake)
-
- # drift loss
- all_pred = tf.concat([pred_fake, pred_real], axis=0)
- drift_loss = self.loss_weights["drift"] * tf.reduce_mean(all_pred ** 2)
-
- d_loss = loss_fake + loss_real + gradient_penalty + drift_loss
-
- gradients = total_tape.gradient(
- d_loss, self.discriminator.trainable_weights
- )
- self.d_optimizer.apply_gradients(
- zip(gradients, self.discriminator.trainable_weights)
- )
-
- # Update metrics
- self.d_loss_metric.update_state(d_loss)
- self.g_loss_metric.update_state(g_loss)
- return {
- "d_loss": self.d_loss_metric.result(),
- "g_loss": self.g_loss_metric.result(),
- }
-
-    def call(self, inputs: dict):
- style_code = inputs.get("style_code", None)
- z = inputs.get("z", None)
- noise = inputs.get("noise", None)
- batch_size = inputs.get("batch_size", 1)
- alpha = inputs.get("alpha", 1.0)
- alpha = tf.expand_dims(alpha, 0)
- if style_code is None:
- if z is None:
- z = tf.random.normal((batch_size, self.z_dim))
- style_code = self.mapping(z)
-
- if noise is None:
- noise = self.generate_noise(batch_size)
-
- # self.alpha.assign(alpha)
-
- const_input = tf.ones(tuple([batch_size] + list(self.g_input_shape)))
- images = self.generator([const_input, style_code, noise, alpha])
- images = np.clip((images * 0.5 + 0.5) * 255, 0, 255).astype(np.uint8)
-
- return images
-
-# Set up GAN
-
-batch_sizes = {2: 16, 3: 16, 4: 16, 5: 16, 6: 16, 7: 8, 8: 4, 9: 2, 10: 1}
-train_step_ratio = {k: batch_sizes[2] / v for k, v in batch_sizes.items()}
-
-START_RES = 4
-TARGET_RES = 128
-
-# style_gan = StyleGAN(start_res=START_RES, target_res=TARGET_RES)
-
-print("Loading...")
-
-url = "https://github.com/soon-yau/stylegan_keras/releases/download/keras_example_v1.0/stylegan_128x128.ckpt.zip"
-
-weights_path = keras.utils.get_file(
- "stylegan_128x128.ckpt.zip",
- url,
- extract=True,
- cache_dir=os.path.abspath("."),
- cache_subdir="pretrained",
-)
-
-# style_gan.grow_model(128)
-# style_gan.load_weights(os.path.join("pretrained/stylegan_128x128.ckpt"))
-
-# tf.random.set_seed(196)
-# batch_size = 2
-# z = tf.random.normal((batch_size, style_gan.z_dim))
-# w = style_gan.mapping(z)
-# noise = style_gan.generate_noise(batch_size=batch_size)
-# images = style_gan({"style_code": w, "noise": noise, "alpha": 1.0})
-
-# plot_images(images, 5)
-
-class InferenceWrapper:
- def __init__(self, model):
- self.model = model
- self.style_gan = StyleGAN(start_res=START_RES, target_res=TARGET_RES)
- self.style_gan.grow_model(128)
- self.style_gan.load_weights(os.path.join("pretrained/stylegan_128x128.ckpt"))
- self.seed = -1
-
- def __call__(self, seed, feature):
- if seed != self.seed:
- print(f"Loading model: {self.model}")
- tf.random.set_seed(seed)
- batch_size = 1
- self.z = tf.random.normal((batch_size, self.style_gan.z_dim))
- self.w = self.style_gan.mapping(self.z)
-            self.noise = self.style_gan.generate_noise(batch_size=batch_size)
-            self.seed = seed  # remember the seed so later calls with the same seed reuse the cached latents
- else:
- print(f"Model '{self.model}' already loaded, reusing it.")
- return self.style_gan({"style_code": self.w, "noise": self.noise, "alpha": 1.0})[0]
-
-
-wrapper = InferenceWrapper('celeba')
-
-def fn(seed, feature):
- return wrapper(seed, feature)
-
-gr.Interface(
- fn,
- inputs=[
- gr.inputs.Slider(minimum=0, maximum=999999999, step=1, default=0, label='Random Seed'),
- gr.inputs.Radio(list({"test1","test2"}), type="value", default='test1', label='Feature Type')
- ],
- outputs='image',
- examples=[[343, 'test1'], [456, 'test2']],
- enable_queue=True,
- title="Keras StyleGAN Generator",
-    description="Select a random seed and press Submit to generate a new image",
-    article="Face image generation with StyleGAN using tf.keras. The code is from the Keras.io example by Soon-Yau Cheong.",
- css=".panel { padding: 5px } .moflo-link { color: #999 }"
-).launch()
\ No newline at end of file
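A small numeric sketch of the transition schedule used in StyleGAN.train_step above: during a TRANSITION phase alpha ramps linearly from 0 to 1 over the epoch, and the new high-resolution head is blended with the upsampled old head through fade_in. The step counts below are made up for illustration.

import numpy as np

steps_per_epoch = 4
for step in range(1, steps_per_epoch + 1):
    alpha = step / steps_per_epoch                        # 0.25, 0.5, 0.75, 1.0
    new_rgb = np.full((2, 2), 10.0)                       # stand-in for the new head's output
    old_rgb = np.zeros((2, 2))                            # stand-in for the upsampled old head
    blended = alpha * new_rgb + (1.0 - alpha) * old_rgb   # same formula as fade_in
    print(step, alpha, blended[0, 0])                     # 2.5, 5.0, 7.5, 10.0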
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py b/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py
deleted file mode 100644
index 7e2caa03400129ac0bb34ae35274cdf46f27a055..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq import utils
-from fairseq.modules import TransformerEncoderLayer
-
-from .multihead_linear_attention import MultiheadLinearAttention
-
-
-class LinformerTransformerEncoderLayer(TransformerEncoderLayer):
- """
- Implements a Linformer Encoder Layer used in BERT/XLM style pre-trained
- models.
- """
-
- def __init__(self, args, shared_compress_layer):
- # wrap in a list so it's not automatically registered by PyTorch
- self.shared_compress_layer = [shared_compress_layer]
-
- super().__init__(args)
-
- self.register_buffer("version", torch.tensor(2))
-
- def build_self_attention(self, embed_dim, args):
- return MultiheadLinearAttention(
- embed_dim,
- args.encoder_attention_heads,
- dropout=args.dropout,
- self_attention=True,
- q_noise=args.quant_noise_pq,
- qn_block_size=args.quant_noise_pq_block_size,
- compressed=args.compressed,
- max_seq_len=args.max_positions,
- shared_kv_compressed=args.shared_kv_compressed,
- shared_compress_layer=self.shared_compress_layer[0],
- freeze_compress=args.freeze_compress,
- )
-
- def upgrade_state_dict_named(self, state_dict, name):
- super().upgrade_state_dict_named(state_dict, name)
- prefix = name + "." if name != "" else ""
-
- # some old checkpoints had weight sharing implemented incorrectly
- # (note: this was correct in the original paper code)
- if utils.item(state_dict.get(f"{prefix}version", torch.tensor(1))) < 2:
- state_dict[f"{prefix}version"] = torch.tensor(1)
- # check compression layer sharing
- if f"{prefix}shared_compress_layer.weight" in state_dict:
- # reinitialize block without sharing compression layer to match
- # old behavior
- self.shared_compress_layer = [
- torch.nn.Linear(
- self.shared_compress_layer[0].weight.size(1),
- self.shared_compress_layer[0].weight.size(0),
- )
- ]
- self.self_attn = self.build_self_attention(self.embed_dim, self.args)
- # delete shared_compress_layer, since it's already copied to
- # self_attn.compress_k.weight
- del state_dict[f"{prefix}shared_compress_layer.weight"]
- if f"{prefix}shared_compress_layer.bias" in state_dict:
- del state_dict[f"{prefix}shared_compress_layer.bias"]
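The version buffer used above is a common fairseq pattern for one-time checkpoint migrations: a module records its current format version, and upgrade_state_dict_named rewrites older checkpoints in place before loading. A stripped-down sketch of the same idea, with a hypothetical module and legacy key name:

import torch
import torch.nn as nn

class VersionedLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)
        self.register_buffer("version", torch.tensor(2))

    def upgrade_state_dict_named(self, state_dict, name):
        prefix = name + "." if name != "" else ""
        # checkpoints written before version 2 stored the weight under an old key
        if int(state_dict.get(f"{prefix}version", torch.tensor(1))) < 2:
            old_key = f"{prefix}old_proj.weight"          # hypothetical legacy key
            if old_key in state_dict:
                state_dict[f"{prefix}proj.weight"] = state_dict.pop(old_key)
            state_dict[f"{prefix}version"] = torch.tensor(2)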
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/speech_to_text/convtransformer.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/speech_to_text/convtransformer.py
deleted file mode 100644
index eba000d7b0826d2ecf5dc471156f8f8cc9f5e402..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/speech_to_text/convtransformer.py
+++ /dev/null
@@ -1,448 +0,0 @@
-#!/usr/bin/env python3
-
-import logging
-import math
-from typing import Dict, List, Optional, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import checkpoint_utils, utils
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- register_model,
- register_model_architecture,
-)
-from fairseq.models.transformer import Embedding, TransformerDecoder
-from fairseq.modules import LayerNorm, PositionalEmbedding, TransformerEncoderLayer
-from torch import Tensor
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("convtransformer")
-class ConvTransformerModel(FairseqEncoderDecoderModel):
- """
- Transformer-based Speech translation model from ESPNet-ST
- https://arxiv.org/abs/2004.10234
- """
-
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--input-feat-per-channel",
- type=int,
- metavar="N",
- help="encoder input dimension per input channel",
- )
- parser.add_argument(
- "--activation-fn",
- choices=utils.get_available_activation_fns(),
- help="activation function to use",
- )
- parser.add_argument(
- "--dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--activation-dropout",
- "--relu-dropout",
- type=float,
- metavar="D",
- help="dropout probability after activation in FFN.",
- )
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-layers", type=int, metavar="N", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="N",
- help="num encoder attention heads",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="num decoder layers"
- )
- parser.add_argument(
- "--decoder-attention-heads",
- type=int,
- metavar="N",
- help="num decoder attention heads",
- )
- parser.add_argument(
- "--decoder-normalize-before",
- action="store_true",
- help="apply layernorm before each decoder block",
- )
- parser.add_argument(
- "--decoder-output-dim",
- type=int,
- metavar="N",
- help="decoder output dimension (extra linear layer if different from decoder embed dim)",
- )
- parser.add_argument(
- "--share-decoder-input-output-embed",
- action="store_true",
- help="share decoder input and output embeddings",
- )
- parser.add_argument(
- "--layernorm-embedding",
- action="store_true",
- help="add layernorm to embedding",
- )
- parser.add_argument(
- "--no-scale-embedding",
- action="store_true",
- help="if True, dont scale embeddings",
- )
- parser.add_argument(
- "--load-pretrained-encoder-from",
- type=str,
- metavar="STR",
- help="model to take encoder weights from (for initialization)",
- )
- parser.add_argument(
- "--load-pretrained-decoder-from",
- type=str,
- metavar="STR",
- help="model to take decoder weights from (for initialization)",
- )
- parser.add_argument(
- "--conv-out-channels",
- type=int,
- metavar="INT",
- help="the number of output channels of conv layer",
- )
-
- @classmethod
- def build_encoder(cls, args):
- encoder = ConvTransformerEncoder(args)
- if getattr(args, "load_pretrained_encoder_from", None):
- encoder = checkpoint_utils.load_pretrained_component_from_model(
- component=encoder, checkpoint=args.load_pretrained_encoder_from
- )
- return encoder
-
- @classmethod
- def build_decoder(cls, args, task, embed_tokens):
- decoder = TransformerDecoderNoExtra(args, task.target_dictionary, embed_tokens)
- if getattr(args, "load_pretrained_decoder_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_decoder_from
- )
- return decoder
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present in older models
- base_architecture(args)
-
- def build_embedding(dictionary, embed_dim):
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- return Embedding(num_embeddings, embed_dim, padding_idx)
-
- decoder_embed_tokens = build_embedding(
- task.target_dictionary, args.decoder_embed_dim
- )
- encoder = cls.build_encoder(args)
- decoder = cls.build_decoder(args, task, decoder_embed_tokens)
- return cls(encoder, decoder)
-
- @staticmethod
- @torch.jit.unused
- def set_batch_first(lprobs):
- lprobs.batch_first = True
-
- def get_normalized_probs(
- self,
- net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]],
- log_probs: bool,
- sample: Optional[Dict[str, Tensor]] = None,
- ):
- # net_output['encoder_out'] is a (B, T, D) tensor
- lprobs = self.get_normalized_probs_scriptable(net_output, log_probs, sample)
- if self.training:
- self.set_batch_first(lprobs)
- return lprobs
-
- def output_layout(self):
- return "BTD"
-
- """
- The forward method inherited from the base class has a **kwargs argument in
- its input, which is not supported in torchscript. This method overrites the forward
-    its input, which is not supported in torchscript. This method overrides the forward
- """
-
- def forward(self, src_tokens, src_lengths, prev_output_tokens):
- encoder_out = self.encoder(src_tokens=src_tokens, src_lengths=src_lengths)
- decoder_out = self.decoder(
- prev_output_tokens=prev_output_tokens, encoder_out=encoder_out
- )
- return decoder_out
-
-
-class ConvTransformerEncoder(FairseqEncoder):
- """Conv + Transformer encoder"""
-
- def __init__(self, args):
- """Construct an Encoder object."""
- super().__init__(None)
-
- self.dropout = args.dropout
- self.embed_scale = (
- 1.0 if args.no_scale_embedding else math.sqrt(args.encoder_embed_dim)
- )
- self.padding_idx = 1
- self.in_channels = 1
- self.input_dim = args.input_feat_per_channel
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, args.conv_out_channels, 3, stride=2, padding=3 // 2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(
- args.conv_out_channels,
- args.conv_out_channels,
- 3,
- stride=2,
- padding=3 // 2,
- ),
- torch.nn.ReLU(),
- )
- transformer_input_dim = self.infer_conv_output_dim(
- self.in_channels, self.input_dim, args.conv_out_channels
- )
- self.out = torch.nn.Linear(transformer_input_dim, args.encoder_embed_dim)
- self.embed_positions = PositionalEmbedding(
- args.max_source_positions,
- args.encoder_embed_dim,
- self.padding_idx,
- learned=False,
- )
-
- self.transformer_layers = nn.ModuleList([])
- self.transformer_layers.extend(
- [TransformerEncoderLayer(args) for i in range(args.encoder_layers)]
- )
- if args.encoder_normalize_before:
- self.layer_norm = LayerNorm(args.encoder_embed_dim)
- else:
- self.layer_norm = None
-
- def pooling_ratio(self):
- return 4
-
- def infer_conv_output_dim(self, in_channels, input_dim, out_channels):
- sample_seq_len = 200
- sample_bsz = 10
- x = torch.randn(sample_bsz, in_channels, sample_seq_len, input_dim)
- x = torch.nn.Conv2d(1, out_channels, 3, stride=2, padding=3 // 2)(x)
- x = torch.nn.Conv2d(out_channels, out_channels, 3, stride=2, padding=3 // 2)(x)
- x = x.transpose(1, 2)
- mb, seq = x.size()[:2]
- return x.contiguous().view(mb, seq, -1).size(-1)
-
- def forward(self, src_tokens, src_lengths):
- """Encode input sequence.
-        :param torch.Tensor src_tokens: input acoustic features of shape (B, T, feat_dim)
-        :param torch.Tensor src_lengths: number of valid frames per utterance, shape (B,)
-        :return: dictionary with encoder outputs and padding mask
-        :rtype: Dict[str, List[torch.Tensor]]
- """
- bsz, max_seq_len, _ = src_tokens.size()
- x = (
- src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim)
- .transpose(1, 2)
- .contiguous()
- )
- x = self.conv(x)
- bsz, _, output_seq_len, _ = x.size()
- x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1)
- x = self.out(x)
- x = self.embed_scale * x
-
- subsampling_factor = int(max_seq_len * 1.0 / output_seq_len + 0.5)
- input_len_0 = (src_lengths.float() / subsampling_factor).ceil().long()
- input_len_1 = x.size(0) * torch.ones([src_lengths.size(0)]).long().to(
- input_len_0.device
- )
- input_lengths = torch.min(input_len_0, input_len_1)
-
- encoder_padding_mask = lengths_to_padding_mask(input_lengths)
-
- positions = self.embed_positions(encoder_padding_mask).transpose(0, 1)
- x += positions
- x = F.dropout(x, p=self.dropout, training=self.training)
-
- for layer in self.transformer_layers:
- x = layer(x, encoder_padding_mask)
-
- if not encoder_padding_mask.any():
- maybe_encoder_padding_mask = None
- else:
- maybe_encoder_padding_mask = encoder_padding_mask
-
- return {
- "encoder_out": [x],
- "encoder_padding_mask": [maybe_encoder_padding_mask]
- if maybe_encoder_padding_mask is not None
- else [],
- "encoder_embedding": [],
- "encoder_states": [],
- "src_tokens": [],
- "src_lengths": [],
- }
-
- @torch.jit.export
- def reorder_encoder_out(self, encoder_out: Dict[str, List[Tensor]], new_order):
- """
- Reorder encoder output according to *new_order*.
-
- Args:
- encoder_out: output from the ``forward()`` method
- new_order (LongTensor): desired order
-
- Returns:
- *encoder_out* rearranged according to *new_order*
- """
- new_encoder_out = [encoder_out["encoder_out"][0].index_select(1, new_order)]
- if len(encoder_out["encoder_padding_mask"]) == 0:
- new_encoder_padding_mask = []
- else:
- new_encoder_padding_mask = [
- (encoder_out["encoder_padding_mask"][0]).index_select(0, new_order)
- ]
- if len(encoder_out["encoder_embedding"]) == 0:
- new_encoder_embedding = []
- else:
- new_encoder_embedding = [
- (encoder_out["encoder_embedding"][0]).index_select(0, new_order)
- ]
- encoder_states = encoder_out["encoder_states"]
- if len(encoder_states) > 0:
- for idx, state in enumerate(encoder_states):
- encoder_states[idx] = state.index_select(1, new_order)
-
- return {
- "encoder_out": new_encoder_out,
- "encoder_padding_mask": new_encoder_padding_mask,
- "encoder_embedding": new_encoder_embedding,
- "encoder_states": encoder_states,
- "src_tokens": [],
- "src_lengths": [],
- }
-
-
-class TransformerDecoderNoExtra(TransformerDecoder):
- def extract_features(
- self,
- prev_output_tokens,
- encoder_out: Optional[Dict[str, List[Tensor]]],
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
- full_context_alignment: bool = False,
- alignment_layer: Optional[int] = None,
- alignment_heads: Optional[int] = None,
- ):
- # call scriptable method from parent class
- x, _ = self.extract_features_scriptable(
- prev_output_tokens,
- encoder_out,
- incremental_state,
- full_context_alignment,
- alignment_layer,
- alignment_heads,
- )
- return x, None
-
-
-@register_model_architecture(model_name="convtransformer", arch_name="convtransformer")
-def base_architecture(args):
- args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048)
- args.encoder_layers = getattr(args, "encoder_layers", 6)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
- args.decoder_ffn_embed_dim = getattr(
- args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim
- )
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
- args.attention_dropout = getattr(args, "attention_dropout", 0.0)
- args.activation_dropout = getattr(args, "activation_dropout", 0.0)
- args.activation_fn = getattr(args, "activation_fn", "relu")
- args.dropout = getattr(args, "dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", False
- )
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
- args.adaptive_input = getattr(args, "adaptive_input", False)
- args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0)
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
- args.no_scale_embedding = getattr(args, "no_scale_embedding", False)
- args.quant_noise_pq = getattr(args, "quant_noise_pq", 0)
- args.max_source_positions = getattr(args, "max_source_positions", 3000)
- args.max_target_positions = getattr(args, "max_target_positions", 1024)
- args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False)
- args.conv_out_channels = getattr(args, "conv_out_channels", args.encoder_embed_dim)
-
-
-@register_model_architecture("convtransformer", "convtransformer_espnet")
-def convtransformer_espnet(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256)
- args.encoder_layers = getattr(args, "encoder_layers", 12)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/tasks/legacy_masked_lm.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/tasks/legacy_masked_lm.py
deleted file mode 100644
index 975497654926b64fff6c4960f54c4e6932e7fce1..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/tasks/legacy_masked_lm.py
+++ /dev/null
@@ -1,152 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import itertools
-import logging
-import os
-
-import numpy as np
-from fairseq import tokenizer, utils
-from fairseq.data import ConcatDataset, Dictionary, data_utils, indexed_dataset
-from fairseq.data.legacy.block_pair_dataset import BlockPairDataset
-from fairseq.data.legacy.masked_lm_dataset import MaskedLMDataset
-from fairseq.data.legacy.masked_lm_dictionary import BertDictionary
-from fairseq.tasks import LegacyFairseqTask, register_task
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_task("legacy_masked_lm")
-class LegacyMaskedLMTask(LegacyFairseqTask):
- """
- Task for training Masked LM (BERT) model.
- Args:
- dictionary (Dictionary): the dictionary for the input of the task
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- parser.add_argument(
- "data",
- help="colon separated path to data directories list, \
- will be iterated upon during epochs in round-robin manner",
- )
- parser.add_argument(
- "--tokens-per-sample",
- default=512,
- type=int,
- help="max number of total tokens over all segments"
- " per sample for BERT dataset",
- )
- parser.add_argument(
- "--break-mode", default="doc", type=str, help="mode for breaking sentence"
- )
- parser.add_argument("--shuffle-dataset", action="store_true", default=False)
-
- def __init__(self, args, dictionary):
- super().__init__(args)
- self.dictionary = dictionary
- self.seed = args.seed
-
- @classmethod
- def load_dictionary(cls, filename):
- return BertDictionary.load(filename)
-
- @classmethod
- def build_dictionary(
- cls, filenames, workers=1, threshold=-1, nwords=-1, padding_factor=8
- ):
- d = BertDictionary()
- for filename in filenames:
- Dictionary.add_file_to_dictionary(
- filename, d, tokenizer.tokenize_line, workers
- )
- d.finalize(threshold=threshold, nwords=nwords, padding_factor=padding_factor)
- return d
-
- @property
- def target_dictionary(self):
- return self.dictionary
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- """Setup the task."""
- paths = utils.split_paths(args.data)
- assert len(paths) > 0
- dictionary = BertDictionary.load(os.path.join(paths[0], "dict.txt"))
- logger.info("dictionary: {} types".format(len(dictionary)))
-
- return cls(args, dictionary)
-
- def load_dataset(self, split, epoch=1, combine=False):
- """Load a given dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- loaded_datasets = []
-
- paths = utils.split_paths(self.args.data)
- assert len(paths) > 0
- data_path = paths[(epoch - 1) % len(paths)]
-        logger.info("data_path: %s", data_path)
-
- for k in itertools.count():
- split_k = split + (str(k) if k > 0 else "")
- path = os.path.join(data_path, split_k)
- ds = indexed_dataset.make_dataset(
- path,
- impl=self.args.dataset_impl,
- fix_lua_indexing=True,
- dictionary=self.dictionary,
- )
-
- if ds is None:
- if k > 0:
- break
- else:
- raise FileNotFoundError(
- "Dataset not found: {} ({})".format(split, data_path)
- )
-
- with data_utils.numpy_seed(self.seed + k):
- loaded_datasets.append(
- BlockPairDataset(
- ds,
- self.dictionary,
- ds.sizes,
- self.args.tokens_per_sample,
- break_mode=self.args.break_mode,
- doc_break_size=1,
- )
- )
-
- logger.info(
- "{} {} {} examples".format(data_path, split_k, len(loaded_datasets[-1]))
- )
-
- if not combine:
- break
-
- if len(loaded_datasets) == 1:
- dataset = loaded_datasets[0]
- sizes = dataset.sizes
- else:
- dataset = ConcatDataset(loaded_datasets)
- sizes = np.concatenate([ds.sizes for ds in loaded_datasets])
-
- self.datasets[split] = MaskedLMDataset(
- dataset=dataset,
- sizes=sizes,
- vocab=self.dictionary,
- pad_idx=self.dictionary.pad(),
- mask_idx=self.dictionary.mask(),
- classif_token_idx=self.dictionary.cls(),
- sep_token_idx=self.dictionary.sep(),
- shuffle=self.args.shuffle_dataset,
- seed=self.seed,
- )
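For clarity, a tiny illustration of the round-robin shard selection performed by load_dataset above (the directory names are hypothetical): with three data directories, epochs 1, 2, 3, 4 read from shards 0, 1, 2 and then wrap back to 0.

paths = ["/data/shard0", "/data/shard1", "/data/shard2"]   # hypothetical shards
for epoch in range(1, 5):
    data_path = paths[(epoch - 1) % len(paths)]
    print(epoch, data_path)   # 1 -> shard0, 2 -> shard1, 3 -> shard2, 4 -> shard0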
diff --git a/spaces/multimodalart/upload_your_model/style.css b/spaces/multimodalart/upload_your_model/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/multimodalart/upload_your_model/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/mygyasir/masterful-gligen-1-4-inpainting-text-box1/app.py b/spaces/mygyasir/masterful-gligen-1-4-inpainting-text-box1/app.py
deleted file mode 100644
index feb15e83985bafdd6a74363fcfd22ba054010007..0000000000000000000000000000000000000000
--- a/spaces/mygyasir/masterful-gligen-1-4-inpainting-text-box1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.load("models/masterful/gligen-1-4-inpainting-text-box").launch()
\ No newline at end of file
diff --git a/spaces/nickmuchi/license-plate-detection-with-YOLOS/app.py b/spaces/nickmuchi/license-plate-detection-with-YOLOS/app.py
deleted file mode 100644
index 2a42ec2c191e69552622f53054d7cc9ede991664..0000000000000000000000000000000000000000
--- a/spaces/nickmuchi/license-plate-detection-with-YOLOS/app.py
+++ /dev/null
@@ -1,179 +0,0 @@
-import io
-import gradio as gr
-import matplotlib.pyplot as plt
-import requests, validators
-import torch
-import pathlib
-from PIL import Image
-from transformers import AutoFeatureExtractor, YolosForObjectDetection, DetrForObjectDetection
-import os
-
-
-os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
-
-# colors for visualization
-COLORS = [
- [0.000, 0.447, 0.741],
- [0.850, 0.325, 0.098],
- [0.929, 0.694, 0.125],
- [0.494, 0.184, 0.556],
- [0.466, 0.674, 0.188],
- [0.301, 0.745, 0.933]
-]
-
-def make_prediction(img, feature_extractor, model):
- inputs = feature_extractor(img, return_tensors="pt")
- outputs = model(**inputs)
- img_size = torch.tensor([tuple(reversed(img.size))])
- processed_outputs = feature_extractor.post_process(outputs, img_size)
- return processed_outputs[0]
-
-def fig2img(fig):
- buf = io.BytesIO()
- fig.savefig(buf)
- buf.seek(0)
- pil_img = Image.open(buf)
- basewidth = 750
- wpercent = (basewidth/float(pil_img.size[0]))
- hsize = int((float(pil_img.size[1])*float(wpercent)))
- img = pil_img.resize((basewidth,hsize), Image.Resampling.LANCZOS)
- return img
-
-
-def visualize_prediction(img, output_dict, threshold=0.5, id2label=None):
- keep = output_dict["scores"] > threshold
- boxes = output_dict["boxes"][keep].tolist()
- scores = output_dict["scores"][keep].tolist()
- labels = output_dict["labels"][keep].tolist()
-
- if id2label is not None:
-
- labels = [id2label[x] for x in labels]
-
-
- plt.figure(figsize=(50, 50))
- plt.imshow(img)
- ax = plt.gca()
- colors = COLORS * 100
- for score, (xmin, ymin, xmax, ymax), label, color in zip(scores, boxes, labels, colors):
- if label == 'license-plates':
- ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, fill=False, color=color, linewidth=10))
- ax.text(xmin, ymin, f"{label}: {score:0.2f}", fontsize=60, bbox=dict(facecolor="yellow", alpha=0.8))
- plt.axis("off")
- return fig2img(plt.gcf())
-
-def get_original_image(url_input):
- if validators.url(url_input):
- image = Image.open(requests.get(url_input, stream=True).raw)
-
- return image
-
-def detect_objects(model_name,url_input,image_input,webcam_input,threshold):
-
- #Extract model and feature extractor
- feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
-
- if "yolos" in model_name:
- model = YolosForObjectDetection.from_pretrained(model_name)
- elif "detr" in model_name:
- model = DetrForObjectDetection.from_pretrained(model_name)
-
- if validators.url(url_input):
- image = get_original_image(url_input)
-
- elif image_input:
- image = image_input
-
- elif webcam_input:
- image = webcam_input
-
- #Make prediction
- processed_outputs = make_prediction(image, feature_extractor, model)
-
- #Visualize prediction
- viz_img = visualize_prediction(image, processed_outputs, threshold, model.config.id2label)
-
- return viz_img
-
-def set_example_image(example: list) -> dict:
- return gr.Image.update(value=example[0])
-
-def set_example_url(example: list) -> dict:
- return gr.Textbox.update(value=example[0]), gr.Image.update(value=get_original_image(example[0]))
-
-
-title = """License Plate Detection with YOLOS """
-
-description = """
-YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
-The YOLOS model was fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
-This model was further fine-tuned on the [Car license plate dataset](https://www.kaggle.com/datasets/andrewmvd/car-plate-detection) from Kaggle. The dataset consists of 443 images of vehicles with annotations categorised as "Vehicle" and "Rego Plates". The model was trained for 200 epochs on a single GPU.
-Links to HuggingFace Models:
-- [nickmuchi/yolos-small-rego-plates-detection](https://huggingface.co/nickmuchi/yolos-small-rego-plates-detection)
-- [hustvl/yolos-small](https://huggingface.co/hustvl/yolos-small)
-"""
-
-models = ["nickmuchi/yolos-small-finetuned-license-plate-detection","nickmuchi/detr-resnet50-license-plate-detection"]
-urls = ["https://drive.google.com/uc?id=1j9VZQ4NDS4gsubFf3m2qQoTMWLk552bQ","https://drive.google.com/uc?id=1p9wJIqRz3W50e2f_A0D8ftla8hoXz4T5"]
-images = [[path.as_posix()] for path in sorted(pathlib.Path('images').rglob('*.j*g'))]
-
-twitter_link = """
-[](https://twitter.com/nickmuchi)
-"""
-
-css = '''
-h1#title {
- text-align: center;
-}
-'''
-demo = gr.Blocks(css=css)
-
-with demo:
- gr.Markdown(title)
- gr.Markdown(description)
- gr.Markdown(twitter_link)
- options = gr.Dropdown(choices=models,label='Object Detection Model',value=models[0],show_label=True)
- slider_input = gr.Slider(minimum=0.2,maximum=1,value=0.5,step=0.1,label='Prediction Threshold')
-
- with gr.Tabs():
- with gr.TabItem('Image URL'):
- with gr.Row():
- with gr.Column():
- url_input = gr.Textbox(lines=2,label='Enter valid image URL here..')
- original_image = gr.Image(shape=(750,750))
- url_input.change(get_original_image, url_input, original_image)
- with gr.Column():
- img_output_from_url = gr.Image(shape=(750,750))
-
- with gr.Row():
- example_url = gr.Examples(examples=urls,inputs=[url_input])
-
-
- url_but = gr.Button('Detect')
-
- with gr.TabItem('Image Upload'):
- with gr.Row():
- img_input = gr.Image(type='pil',shape=(750,750))
- img_output_from_upload= gr.Image(shape=(750,750))
-
- with gr.Row():
- example_images = gr.Examples(examples=images,inputs=[img_input])
-
-
- img_but = gr.Button('Detect')
-
- with gr.TabItem('WebCam'):
- with gr.Row():
- web_input = gr.Image(source='webcam',type='pil',shape=(750,750),streaming=True)
- img_output_from_webcam= gr.Image(shape=(750,750))
-
- cam_but = gr.Button('Detect')
-
- url_but.click(detect_objects,inputs=[options,url_input,img_input,web_input,slider_input],outputs=[img_output_from_url],queue=True)
- img_but.click(detect_objects,inputs=[options,url_input,img_input,web_input,slider_input],outputs=[img_output_from_upload],queue=True)
- cam_but.click(detect_objects,inputs=[options,url_input,img_input,web_input,slider_input],outputs=[img_output_from_webcam],queue=True)
-
- gr.Markdown("")
-
-
-demo.launch(debug=True,enable_queue=True)
\ No newline at end of file
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_coco_evaluation.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_coco_evaluation.py
deleted file mode 100644
index 964f00284df64d3378ebfe32913c07deb5a1f819..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/data/test_coco_evaluation.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import contextlib
-import copy
-import io
-import json
-import numpy as np
-import os
-import tempfile
-import unittest
-import torch
-from pycocotools.coco import COCO
-from pycocotools.cocoeval import COCOeval
-
-from detectron2.data import DatasetCatalog
-from detectron2.evaluation import COCOEvaluator
-from detectron2.evaluation.fast_eval_api import COCOeval_opt
-from detectron2.structures import Boxes, Instances
-
-
-class TestCOCOeval(unittest.TestCase):
- def test_fast_eval(self):
- # A small set of images/categories from COCO val
- # fmt: off
- detections = [{"image_id": 139, "category_id": 1, "bbox": [417.3332824707031, 159.27003479003906, 47.66064453125, 143.00193786621094], "score": 0.9949821829795837, "segmentation": {"size": [426, 640], "counts": "Tc`52W=3N0N4aNN^E7]:4XE1g:8kDMT;U100000001O1gE[Nk8h1dFiNY9Z1aFkN]9g2J3NdN`FlN`9S1cFRN07]9g1bFoM6;X9c1cFoM=8R9g1bFQN>3U9Y30O01OO1O001N2O1N1O4L4L5UNoE3V:CVF6Q:@YF9l9@ZF 0 else 0.0
- msg = "%s: comparing COCO APIs, %s differs by %f" % (name, k, abs_diff)
- self.assertTrue(abs_diff < 1e-4, msg=msg)
-
- def test_unknown_category(self):
- dataset = "coco_2017_val_100"
- evaluator = COCOEvaluator(dataset)
- evaluator.reset()
- inputs = DatasetCatalog.get(dataset)[:2]
- pred = Instances((100, 100))
- pred.pred_boxes = Boxes(torch.rand(2, 4))
- pred.scores = torch.rand(2)
- pred.pred_classes = torch.tensor([10, 80])
- output = {"instances": pred}
- evaluator.process(inputs, [output, output])
- with self.assertRaises(AssertionError):
- evaluator.evaluate()
diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/plugin_002.js b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/plugin_002.js
deleted file mode 100644
index 1d8cc354d9ba258f24c21678ea86a0d0d1336b42..0000000000000000000000000000000000000000
--- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/plugin_002.js
+++ /dev/null
@@ -1,220 +0,0 @@
-CKEDITOR.plugins.add( 'quicktable', {
- requires: 'table,panelbutton,floatpanel',
- lang: 'en',
- afterInit: function( editor ) {
- var conf = editor.config,
- quickRows = conf.qtRows || 9,
- quickColumns = conf.qtColumns || 9,
- quickBorder = conf.qtBorder || '1',
- quickStyle = conf.qtStyle || null,
- quickClass = conf.qtClass || '',
- quickCellPadding = conf.qtCellPadding || '1',
- quickCellSpacing = conf.qtCellSpacing || '1',
- quickWidth = conf.qtWidth || '500px',
- quickPreviewSize = conf.qtPreviewSize || '14px',
- quickPreviewBorder = conf.qtPreviewBorder || '1px solid #aaa',
- quickPreviewBackground = conf.qtPreviewBackground || '#e5e5e5';
-
- function makeElement( name ) {
- return new CKEDITOR.dom.element( name, editor.document );
- }
-
- function insertTable( rowCount, columnCount ) {
- var table = makeElement( 'table' );
- var tbody = table.append( makeElement( 'tbody' ) );
-
- for ( var i = 0; i < rowCount; i++ ) {
- var row = tbody.append( makeElement( 'tr' ) );
- for ( var j = 0; j < columnCount; j++ ) {
- var cell = row.append( makeElement( 'td' ) );
- cell.appendBogus();
- }
- }
-
- conf.qtCellPadding !== null && table.setAttribute( 'cellpadding', quickCellPadding );
- conf.qtCellSpacing !== null && table.setAttribute( 'cellspacing', quickCellSpacing );
- conf.qtBorder !== null && table.setAttribute( 'border', quickBorder );
- table.setAttribute( 'class', quickClass );
- table.setStyles( quickStyle );
- conf.qtWidth !== null && table.setStyle( 'width', quickWidth );
- editor.insertElement( table );
-
- // Fire event for showborders plugin (so hidden borders are visible)
- editor.fire('removeFormatCleanup', table);
- }
-
- function renderQuickTable(panel) {
- var output = [];
-
- var clickFn = CKEDITOR.tools.addFunction( function( i, j ) {
- insertTable( parseInt( i, 10 ) + 1, parseInt( j, 10 ) + 1 );
- panel.hide();
- } );
-
-  output.push( '<div role="grid" tabindex="-1">' +
-    '<table cellspacing="0" style="border-collapse:separate;"><tbody>' );
-
-  for ( var i = 0; i < quickRows; i++ ) {
-    output.push( '<tr>' );
-    for ( var j = 0; j < quickColumns; j++ ) {
-      output.push( '<td data-i="' + i + '" data-j="' + j + '"' +
-        ' style="width:' + quickPreviewSize + ';height:' + quickPreviewSize + ';border:' + quickPreviewBorder + ';cursor:pointer;"' +
-        ' onclick="CKEDITOR.tools.callFunction(' + clickFn + ',' + i + ',' + j + ');"></td>' );
-    }
-    output.push( '</tr>' );
-  }
-
-  output.push( '</tbody></table></div>' );
-
- return output.join( '' );
- }
-
- var selection = {row: -1, column: -1};
- function select( label, table, rowCount, columnCount ) {
- var rows = table.$.tBodies[0].rows;
- for ( var i = 0; i < rows.length; i++ ) {
- var cells = rows[i].cells;
- for ( var j = 0; j < cells.length; j++ ) {
- var cell = cells[j];
- if ( i < rowCount && j < columnCount ) {
- cell.style.background = quickPreviewBackground;
- } else {
- cell.style.background = '';
- }
- }
- }
- selection.row = rowCount - 1;
- selection.column = columnCount - 1;
- label.setText( rowCount + ' × ' + columnCount + ' ' + editor.lang.table.toolbar );
- }
-
- editor.ui.add( 'Table', CKEDITOR.UI_PANELBUTTON, {
- label: editor.lang.table.toolbar,
- command: 'table',
- modes: { wysiwyg: 1 },
- editorFocus: 0,
- toolbar: 'insert,30',
-
- caption: null,
- table: null,
-
- panel: {
- css: CKEDITOR.skin.getPath( 'editor' ),
- attributes: { role: 'listbox', 'aria-label': editor.lang.table.toolbar }
- },
-
- onBlock: function( panel, block ) {
- block.autoSize = true;
- block.element.addClass( 'cke_colorblock' );
-
- var caption = new CKEDITOR.dom.element( 'div' );
- caption.setStyles( { 'text-align': 'center', 'margin': '3px 0' } );
- block.element.append( caption );
- this.caption = caption;
-
- var tableWrapper = CKEDITOR.dom.element.createFromHtml( renderQuickTable(panel) );
- this.table = this.addEvents(tableWrapper);
- block.element.append( tableWrapper );
-
- var moreButton = this.createMoreButton();
- block.element.append( moreButton );
-
- CKEDITOR.ui.fire( 'ready', this );
-
- block.keys = this.assignKeys(block.keys);
- },
-
- assignKeys: function(keys){
- var rtl = editor.lang.dir == 'rtl';
- keys[ rtl ? 37 : 39 ] = 'next'; // ARROW-RIGHT
- keys[ 40 ] = 'next'; // ARROW-DOWN
- keys[ 9 ] = 'next'; // TAB
- keys[ rtl ? 39 : 37 ] = 'prev'; // ARROW-LEFT
- keys[ 38 ] = 'prev'; // ARROW-UP
- keys[ CKEDITOR.SHIFT + 9 ] = 'prev'; // SHIFT + TAB
- keys[ 32 ] = 'click'; // SPACE
- return keys;
- },
-
- addEvents: function(tableWrapper){
- var table = this.table = tableWrapper.getFirst();
- var caption = this.caption;
- table.on( 'mouseleave', function( evt ) {
- select( caption, table, 1, 1 );
- } );
- table.on( 'mousemove', function( evt ) {
- var target = evt.data.getTarget();
- if ( target.getName() == 'td' ) {
- var i = parseInt( target.getAttribute( 'data-i' ), 10 );
- var j = parseInt( target.getAttribute( 'data-j' ), 10 );
- select( caption, table, i + 1, j + 1 );
- }
- } );
- tableWrapper.on( 'keydown', function( evt ) {
- var keystroke = evt.data.getKeystroke(),
- row = selection.row,
- column = selection.column;
-
- switch ( keystroke ) {
- case 37: // ARROW-LEFT
- column--;
- break;
- case 39: // ARROW-RIGHT
- column++;
- break;
- case 40: // ARROW-DOWN
- row++;
- break;
- case 38: // ARROW-UP
- row--;
- break;
- case 13: // ENTER
- case 32: // SPACE
- insertTable( row + 1, column + 1 );
- return;
- default:
- return;
- }
-
- if ( row < 0 || column < 0 ) {
- this.panel.hide();
- return;
- }
-
- if ( row > quickRows - 1 || column > quickColumns - 1 ) {
- editor.execCommand( 'table' );
- }
- select( caption, table, row + 1, column + 1 );
- evt.data.preventDefault();
- evt.data.stopPropagation();
- });
-
- return table;
- },
-
- createMoreButton: function() {
- var moreButton = new CKEDITOR.dom.element( 'a' );
- moreButton.setAttributes( {
- _cke_focus: 1,
- title: "",
- hidefocus: true,
- href: 'javascript:void("More")',
- role: 'option'
- } );
- moreButton.addClass( 'cke_colormore' );
- moreButton.setText( "More" );
- moreButton.setStyle( 'text-align', 'center' );
- moreButton.on( 'click', function( evt ) {
- editor.execCommand( 'table' );
- evt.data.preventDefault();
- } );
-
- return moreButton;
- },
-
- onOpen: function() {
- select( this.caption, this.table, 1, 1 );
- }
- } );
- }
-});
\ No newline at end of file
diff --git a/spaces/ntt123/vietnamese-handwriting/README.md b/spaces/ntt123/vietnamese-handwriting/README.md
deleted file mode 100644
index 47e567be0c6a5ddb6dc272d44958549d3d25f956..0000000000000000000000000000000000000000
--- a/spaces/ntt123/vietnamese-handwriting/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Vietnamese Handwriting
-emoji: 🏃
-colorFrom: indigo
-colorTo: blue
-sdk: static
-pinned: false
-license: cc-by-nc-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/ofikodar/chatgpt-resume-builder/src/pdf_handler.py b/spaces/ofikodar/chatgpt-resume-builder/src/pdf_handler.py
deleted file mode 100644
index d69920c095c6d038fc370f67ca2a631145ea1040..0000000000000000000000000000000000000000
--- a/spaces/ofikodar/chatgpt-resume-builder/src/pdf_handler.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import PyPDF2
-from jinja2 import FileSystemLoader, Environment
-
-
-def parse_pdf(pdf_file):
- # Accept either a filesystem path or an already-open file-like object.
- if isinstance(pdf_file, str):
- with open(pdf_file, "rb") as file:
- return _parse(file)
- else:
- return _parse(pdf_file)
-
-
-def _parse(file):
- reader = PyPDF2.PdfReader(file)
- pdf_text = []
- num_pages = len(reader.pages)
- # Iterate over each page
- for page_number in range(num_pages):
- # Get the current page
- page = reader.pages[page_number]
-
- # Extract the text from the page
- page_text = page.extract_text()
-
- pdf_text.append(page_text)
- pdf_text = '\n'.join(pdf_text)
- return pdf_text, num_pages
-
-
-def build_html_resume(data):
- env = Environment(loader=FileSystemLoader('src/templates'))
- template = env.get_template('resume.html')
- html_resume = template.render(data)
- return html_resume
-
-
-def export_html(html_resume, output_path):
- with open(output_path, 'w', encoding='utf8') as f:
- f.write(html_resume)
diff --git a/spaces/onnx/mask-rcnn/README.md b/spaces/onnx/mask-rcnn/README.md
deleted file mode 100644
index 1b196cd6371d04bfae2b3457b2a5239e5321479b..0000000000000000000000000000000000000000
--- a/spaces/onnx/mask-rcnn/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Mask Rcnn
-emoji: 💻
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 2.8.13
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/train/README.md b/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/train/README.md
deleted file mode 100644
index 672ac47e217e6f06f4f8c295a890a84d1e021910..0000000000000000000000000000000000000000
--- a/spaces/openflamingo/OpenFlamingo/open_flamingo/open_flamingo/train/README.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# OpenFlamingo Training
-To train OpenFlamingo, please ensure your environment matches that of `environment.yml`.
-
-## Data
-Our codebase uses [WebDataset](https://github.com/webdataset/webdataset) to efficiently load `.tar` files containing image and text sequences. We recommend resampling shards with replacement during training using the `--dataset_resampled` flag.
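-
-Purely as a hedged illustration of the resampling idea (not the project's actual data code), a standalone WebDataset pipeline over image/text shards might look like this; the shard pattern is the placeholder reused from the example command below, and the shuffle buffer and batch size are arbitrary:
-
-```
-import webdataset as wds
-
-# Shards are drawn with replacement, so an "epoch" is simply a fixed number of batches.
-pipeline = wds.DataPipeline(
-    wds.ResampledShards("/path/to/shards/shard-{0000..0999}.tar"),
-    wds.tarfile_to_samples(),    # expand each tar into individual samples
-    wds.shuffle(1000),           # small in-memory shuffle buffer
-    wds.decode("pil"),           # decode images to PIL
-    wds.to_tuple("jpg", "txt"),  # (image, caption) pairs
-    wds.batched(64),
-)
-
-for images, captions in pipeline:
-    pass  # hand each batch to the training loop
-```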
-
-### LAION-2B Dataset
-[LAION-2B](https://arxiv.org/abs/2210.08402) contains 2B web-scraped (image, text) pairs.
-We use [img2dataset](https://github.com/rom1504/img2dataset) to download this dataset into tar files.
-
-### Multimodal C4 Dataset
-We train on the full version of [Multimodal C4 (MMC4)](https://github.com/allenai/mmc4), which includes 103M documents of web-scraped, interleaved image-text sequences. During training, we truncate sequences to 256 text tokens and six images per sequence.
-
-Our codebase expects `.tar` files containing `.json` files, which include raw images encoded in base64.
-We provide scripts to convert MMC4 to this format; a sketch of the resulting record layout follows the steps below:
-
-1. Download the MMC4 shards into `.zip` files using [the MMC4-provided scripts](https://github.com/allenai/mmc4/tree/main/scripts) (e.g., `fewer_facesv2.sh`).
-2. Download the MMC4 raw images into an image directory using [the MMC4-provided scripts](https://github.com/allenai/mmc4/tree/main/scripts) (e.g., `download_images.py`).
-3. Run `scripts/convert_mmc4_to_wds.py` to convert the downloaded items into the expected tar files.
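-
-The sketch below shows roughly what one such record could contain; the field names are illustrative assumptions, not the exact schema produced by `convert_mmc4_to_wds.py`:
-
-```
-import base64, io, json, tarfile
-
-# One interleaved document: text chunks plus an image stored inline as base64.
-doc = {
-    "text_list": ["Paragraph before the image.", "Paragraph after the image."],
-    "image_info": [{
-        "image_name": "example.jpg",
-        "image_base64": base64.b64encode(open("example.jpg", "rb").read()).decode("utf-8"),
-    }],
-}
-
-# Pack documents into a tar shard; each member is a single .json file.
-with tarfile.open("shard-0000.tar", "w") as tar:
-    payload = json.dumps(doc).encode("utf-8")
-    member = tarfile.TarInfo(name="000000000.json")
-    member.size = len(payload)
-    tar.addfile(member, io.BytesIO(payload))
-```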
-
-### ChatGPT-generated sequences
-A subset of our models (listed below) were also trained on experimental ChatGPT-generated (image, text) sequences, where images are pulled from LAION. We are working to release these sequences soon.
-
-* OpenFlamingo-4B-vitl-rpj3b
-* OpenFlamingo-4B-vitl-rpj3b-langinstruct
-
-## Example training command
-We provide a sample Slurm training script in `scripts/`. You can also modify the following command:
-
-```
-torchrun --nnodes=1 --nproc_per_node=4 train.py \
- --lm_path anas-awadalla/mpt-1b-redpajama-200b \
- --tokenizer_path anas-awadalla/mpt-1b-redpajama-200b \
- --cross_attn_every_n_layers 1 \
- --dataset_resampled \
- --batch_size_mmc4 32 \
- --batch_size_laion 64 \
- --train_num_samples_mmc4 125000\
- --train_num_samples_laion 250000 \
- --loss_multiplier_laion 0.2 \
- --workers=4 \
- --run_name OpenFlamingo-3B-vitl-mpt1b \
- --num_epochs 480 \
- --warmup_steps 1875 \
- --mmc4_textsim_threshold 0.24 \
- --laion_shards "/path/to/shards/shard-{0000..0999}.tar" \
- --mmc4_shards "/path/to/shards/shard-{0000..0999}.tar" \
- --report_to_wandb
-```
-*Note: The MPT-1B [base](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b) and [instruct](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) modeling code does not accept the `labels` kwarg or compute cross-entropy loss directly within `forward()`, as expected by our codebase. We suggest using a modified version of the MPT-1B models found [here](https://huggingface.co/anas-awadalla/mpt-1b-redpajama-200b) and [here](https://huggingface.co/anas-awadalla/mpt-1b-redpajama-200b-dolly).*
-
-## Distributed training
-
-By default, `train.py` uses Pytorch's [DistributedDataParallel](https://pytorch.org/docs/stable/torch.nn.parallel.DistributedDataParallel.html) for training.
-To use [FullyShardedDataParallel](https://pytorch.org/docs/stable/fsdp.html), use the `--fsdp` flag.
-
-Some notes on FSDP:
-
-* We recommend using the `--fsdp_use_orig_params` flag. If `--fsdp` is on without this flag, all language model embeddings will be unfrozen during training. (In contrast, the default behavior is to only train the newly added `<image>` and `<|endofchunk|>` tokens.)
- * Note: we've encountered issues using OPT with this flag. Other language models should be compatible.
-* Our current FSDP wrapping strategy does not permit training language model embeddings that use tied weights (i.e., tied input / output embeddings). To train such models with FSDP, the language model embeddings must be frozen with the `--freeze_lm_embeddings` flag.
-
-We also implement gradient checkpointing and mixed precision training. Use the `--gradient_checkpointing` and `--precision` arguments respectively.
\ No newline at end of file
diff --git a/spaces/osanseviero/flask_test/app.py b/spaces/osanseviero/flask_test/app.py
deleted file mode 100644
index 138335e3f8d5491cb2b2acc4aabe51b497c3804b..0000000000000000000000000000000000000000
--- a/spaces/osanseviero/flask_test/app.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import flask
-import os
-from dotenv import load_dotenv
-load_dotenv()
-
-app = flask.Flask(__name__, template_folder="./")
-
-
-@app.route('/')
-def index():
- return flask.render_template('index.html')
-
-if __name__ == '__main__':
- app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 7860)))
diff --git a/spaces/parkyzh/bingo/src/components/toaster.tsx b/spaces/parkyzh/bingo/src/components/toaster.tsx
deleted file mode 100644
index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000
--- a/spaces/parkyzh/bingo/src/components/toaster.tsx
+++ /dev/null
@@ -1,3 +0,0 @@
-'use client'
-
-export { Toaster } from 'react-hot-toast'
diff --git a/spaces/parvezalmuqtadir/stablediffusionapi-vector-art/app.py b/spaces/parvezalmuqtadir/stablediffusionapi-vector-art/app.py
deleted file mode 100644
index caa5ba5a516a05def90868f69f34c9d2d904cebf..0000000000000000000000000000000000000000
--- a/spaces/parvezalmuqtadir/stablediffusionapi-vector-art/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stablediffusionapi/vector-art").launch()
\ No newline at end of file
diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/train_hybrid_bottom.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/train_hybrid_bottom.py
deleted file mode 100644
index ec32c3208760275d7ff9b93872ede55a7f6d75e0..0000000000000000000000000000000000000000
--- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/train_hybrid_bottom.py
+++ /dev/null
@@ -1,306 +0,0 @@
-import torch, multiprocessing, itertools, os, shutil, PIL, argparse, numpy
-from torch.nn.functional import mse_loss
-from collections import defaultdict, OrderedDict
-from seeing import encoder_net, setting
-from seeing.encoder_loss import cor_square_error
-from seeing import zdataset, pbar, nethook
-from seeing import proggan, customnet, parallelfolder
-from torchvision import transforms, models
-from seeing.pidfile import exit_if_job_done, mark_job_done
-
-torch.backends.cudnn.benchmark = True
-
-parser = argparse.ArgumentParser()
-parser.add_argument('--lr', type=float, help='Learning rate', default=None)
-parser.add_argument('--model', type=str, help='Dataset being modeled',
- default='church')
-parser.add_argument('--bottom', type=int, help='Number of bottom layers',
- default=4)
-parser.add_argument('--recover', nargs='+', default=None,
- help='recovery losses: subset of layer1, layer2, layer3, layer4, x')
-args = parser.parse_args()
-
-global_seed = 1
-variant = None
-if args.lr is None:
- args.lr = {1: 1.5e-4, 2: 1.5e-4, 3: 1.5e-4, 4: 1.5e-4}.get(
- args.bottom, 1.5e-4)
-if args.recover == None:
- variant = ['b%d' % args.bottom]
- args.recover = ['z'] + ['layer%d' % i
- for i in range(1, min(4, args.bottom) + 1)] + (
- [] if args.bottom < 5 else ['x'])
-if variant is None:
- variant = sum([
- [prefix + str(k).replace('layer', '') for k in lst]
- for prefix, lst in [
- ('b', args.bottom),
- ('r', args.recover)]
- ], [])
-
-# make a directory name like 'invert_hybrid_s1_rz_r2_r4_rx'
-# default '_cse' corresponds to '_i4_rz_r4_rx'
-expname = 'invert_hybrid_bottom_' + '_'.join(variant)
-expdir = os.path.join('results', args.model, expname)
-os.makedirs(expdir, exist_ok=True)
-
-num_epochs = 1000
-# lr_milestones = [50, 150, 350] # Reduce learning rate after these epochs
-lr_milestones = [500]
-lr_gamma = 0.2
-
-def main():
- torch.manual_seed(global_seed)
- pbar.print('Training %s' % expname)
-
- # Log the configuration
- logline = '; '.join('%s: %r' % (k, v) for k, v in vars(args).items())
- with open(os.path.join(expdir, 'log.txt'), 'a') as f:
- f.write(logline + '\n')
- pbar.print(logline)
-
- # Load a progressive GAN
- whole_generator = setting.load_proggan(args.model)
- if args.bottom < 5:
- # Truncate it to the specified depth of layers.
- generator = nethook.subsequence(whole_generator,
- last_layer='layer%d' % args.bottom)
- renderer = nethook.subsequence(whole_generator,
- first_layer='layer%d' % (args.bottom + 1))
- else:
- generator = whole_generator
- renderer = torch.nn.Sequential()
- # Make a stacked encoder, and initialize with pretrained layers
- encoder = encoder_net.HybridLayerNormEncoder()
- if args.bottom < 5:
- # Truncate the encoder to the matching set of layers
- encoder = nethook.subsequence(encoder,
- first_layer='inv%d' % args.bottom)
- if args.bottom > 1:
- # Load a pretrained stack of encoder layers (all but the last one)
- prev_filename = (os.path.join('results', args.model,
- 'invert_hybrid_bottom_b%d/snapshots/epoch_1000.pth.tar'
- % (args.bottom - 1)))
- encoder.load_state_dict(torch.load(prev_filename)['state_dict'],
- strict=False)
- # Load a pretrained single layer (just the last one)
- if args.bottom < 5:
- layer = getattr(encoder, 'inv%d' % args.bottom)
- layer_filename = os.path.join('results', args.model,
- 'invert_layer_%d_cse/snapshots/epoch_100.pth.tar'
- % args.bottom)
- else:
- layer = encoder.resnet
- layer_filename = os.path.join('results', args.model,
- 'invert_over5_resnet/snapshots/epoch_100.pth.tar')
- pbar.print('Loading %s' % layer_filename)
- layer.load_state_dict(torch.load(layer_filename)['state_dict'])
-
- # Instrument the generator model so that we can add extra
- # loss terms based on reconstruction of intermediate layers.
- generator = nethook.InstrumentedModel(generator)
- retained_layer_list = [n for n in args.recover if n not in ['x', 'z']]
- generator.retain_layers(retained_layer_list, detach=False)
-
- # Move models to GPU
- for m in [generator, encoder, renderer]:
- m.cuda()
-
- # Set up a training data loader: unending batches of random z.
- batch_size = 32
- train_loader = training_loader(generator, batch_size)
-
- # Test data loader is finite, fixed set of z.
- test_loader = testing_loader(generator, batch_size)
-
- # Set up optimizer
- set_requires_grad(False, generator)
- optimize_conv3 = False
- if optimize_conv3:
- target_params = encoder.parameters()
- else:
- target_params = []
- # The conv3 filters of each layer are redundant with
- # the conv1 in the following layer, so freeze the conv3 layers.
- for n, p in encoder.named_parameters():
- if (n.startswith('inv') and not n.startswith('inv1.')
- and 'conv3' in n):
- p.requires_grad = False
- else:
- target_params.append(p)
- learning_rate = args.lr
- optimizer = torch.optim.Adam(target_params, lr=learning_rate)
- scheduler = torch.optim.lr_scheduler.MultiStepLR(
- optimizer, milestones=lr_milestones, gamma=lr_gamma)
-
- epoch_batches = 100
- for epoch, epoch_loader in enumerate(pbar(
- epoch_grouper(train_loader, epoch_batches), total=(1+num_epochs))):
- # Training loop (for 0th epoch, do no training, just testing)
- if epoch > 0:
- for (z_batch,) in pbar(epoch_loader, total=epoch_batches):
- (z_batch,) = [d.cuda() for d in [z_batch]]
- loss = encoder_loss(z_batch, generator, encoder)
- loss.backward()
- pbar.post(l=loss.item())
- optimizer.step()
- scheduler.step()
- # Testing loop
- with torch.no_grad():
- losses = defaultdict(float)
- count = 0
- for i, (z_batch,) in enumerate(pbar(test_loader)):
- (z_batch,) = [d.cuda() for d in [z_batch]]
- nb = len(z_batch)
- # Some other debugging losses
- count += nb
- losses['loss'] += nb * (
- encoder_loss(z_batch, generator, encoder).item())
- for name, mloss in monitor_losses(
- z_batch, generator, encoder).items():
- losses[name] += nb * mloss.item()
- if epoch % 10 == 0 and i == 0:
- visualize_results(epoch, z_batch, generator, encoder,
- renderer)
- losses = { name: loss / count for name, loss in losses.items() }
- logline = '%d ' % epoch + ' '.join("%s=%4g" % (name, losses[name])
- for name in sorted(losses.keys()))
- pbar.print(logline)
- with open(os.path.join(expdir, 'log.txt'), 'a') as f:
- f.write(logline + '\n')
- if epoch % 10 == 0:
- save_checkpoint(
- epoch=epoch,
- state_dict=encoder.state_dict(),
- lr=learning_rate,
- optimizer=optimizer.state_dict(),
- **losses)
- if epoch == num_epochs:
- break
-
-def save_checkpoint(**kwargs):
- dirname = os.path.join(expdir, 'snapshots')
- os.makedirs(dirname, exist_ok=True)
- filename = 'epoch_%d.pth.tar' % kwargs['epoch']
- torch.save(kwargs, os.path.join(dirname, filename))
-
-def visualize_results(epoch, true_z, generator, encoder, renderer):
- dirname = os.path.join(expdir, 'images')
- os.makedirs(dirname, exist_ok=True)
- true_r, recovered_r = (
- generate_and_recover_features(true_z, generator, encoder))
- true_im, recovered_im = [renderer(d['x']) for d in [true_r, recovered_r]]
- num_images = 6
- for i in range(min(len(true_z), num_images)):
- for name, im in [
- ('epoch_%d_%d_g.png', true_im),
- ('epoch_%d_%d_r.png', recovered_im),
- ]:
- save_tensor_image(im[i], os.path.join(dirname, name % (epoch, i)))
- rawdat = OrderedDict(sum([
- [(template % k.replace('layer', ''), v[i].cpu().numpy())
- for k, v in feats.items()]
- for template, feats in [
- ('t_%s', true_r),
- ('r_%s', recovered_r)]], []))
- numpy.savez(os.path.join(dirname, 'epoch_%d_%d.npz' % (epoch, i)),
- **rawdat)
- shutil.copy('seeing/lightbox.html',
- os.path.join(dirname, '+lightbox.html'))
-
-def save_tensor_image(img, filename):
- np_data = ((img.permute(1, 2, 0) / 2 + 0.5) * 255).byte().cpu().numpy()
- PIL.Image.fromarray(np_data).save(filename)
-
-def generate_and_recover_features(true_z, generator, encoder):
- global args
- true_x = generator(true_z)
- true_r = generator.retained_features(clear=True)
- true_r['z'] = true_z
- true_r['x'] = true_x
- recovered_z = encoder(true_x)
- recovered_x = generator(recovered_z)
- recovered_r = generator.retained_features(clear=True)
- recovered_r['z'] = recovered_z
- recovered_r['x'] = recovered_x
- return true_r, recovered_r
-
-def monitor_losses(true_z, generator, encoder, all_losses=True):
- global args
- true_r, recovered_r = (
- generate_and_recover_features(true_z, generator, encoder))
- losses = {}
- for layer in recovered_r.keys() if all_losses else args.recover:
- if layer == 'x':
- losses['rx'] = (
- mse_loss(true_r[layer], recovered_r[layer]))
- else:
- losses['r' + layer.replace('layer', '')] = (
- cor_square_error(true_r[layer], recovered_r[layer]))
- return losses
-
-def encoder_loss(true_z, generator, encoder):
- return sum(monitor_losses(true_z, generator, encoder,
- all_losses=False).values())
-
-def training_loader(z_generator, batch_size):
- '''
- Returns an infinite generator that runs through randomized z
- batches, forever.
- '''
- g_epoch = 1
- while True:
- z_data = zdataset.z_dataset_for_model(
- z_generator, size=10000, seed=g_epoch + global_seed)
- dataloader = torch.utils.data.DataLoader(
- z_data,
- shuffle=False,
- batch_size=batch_size,
- num_workers=10,
- pin_memory=True)
- for batch in dataloader:
- yield batch
- g_epoch += 1
-
-def testing_loader(z_generator, batch_size):
- '''
- Returns a short iterator that returns a small set of test data.
- '''
- z_data = zdataset.z_dataset_for_model(
- z_generator, size=1000, seed=global_seed)
- dataloader = torch.utils.data.DataLoader(
- z_data,
- shuffle=False,
- batch_size=batch_size,
- num_workers=10,
- pin_memory=True)
- return dataloader
-
-def epoch_grouper(loader, epoch_size):
- '''
- To use with the infinite training loader: groups the training data
- batches into epochs of the given size.
- '''
- it = iter(loader)
- while True:
- chunk_it = itertools.islice(it, epoch_size)
- try:
- first_el = next(chunk_it)
- except StopIteration:
- return
- yield itertools.chain((first_el,), chunk_it)
-
-def set_requires_grad(requires_grad, *models):
- for model in models:
- if model is not None:
- for param in model.parameters():
- param.requires_grad = requires_grad
-
-class IdentityLayer(torch.nn.Module):
- def forward(self, x):
- return x
-
-if __name__ == '__main__':
- exit_if_job_done(expdir)
- main()
- mark_job_done(expdir)
diff --git a/spaces/pharmapsychotic/CLIP-Interrogator/README.md b/spaces/pharmapsychotic/CLIP-Interrogator/README.md
deleted file mode 100644
index f5cbb174f79f0c59d8097f5710d4126667de6a7d..0000000000000000000000000000000000000000
--- a/spaces/pharmapsychotic/CLIP-Interrogator/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: CLIP Interrogator
-emoji: 🕵️♂️
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.40.1
-python_version: 3.9.13
-app_file: app.py
-pinned: true
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/pietrocagnasso/paper-title-generation/TitleGenerator.py b/spaces/pietrocagnasso/paper-title-generation/TitleGenerator.py
deleted file mode 100644
index a2feae4c7b617d3a8f02cb7c74071e22263d4994..0000000000000000000000000000000000000000
--- a/spaces/pietrocagnasso/paper-title-generation/TitleGenerator.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# For the complete implementation refer to https://github.com/nicolovergaro/DNLP_project
-
-import torch
-
-from transformers import BartForConditionalGeneration, AutoTokenizer, TrainingArguments, Trainer
-
-from TitleGenDataset import TitleGenDataset
-
-
-class SmallTitleGenerator():
- def __init__(self, model_name="pietrocagnasso/bart-paper-titles"):
- """
- The models we fine-tuned for this extensions are available on huggingface:
- The models we fine-tuned for this extension are available on huggingface:
- and BIO datasets
- Rouge1: 0.4598, Rouge2: 0.2556, BertScore: 0.8999
- - pietrocagnasso/bart-paper-titles-cs: starting from the general one this model is
- fine-tuned for an additional epoch on the CS dataset
- R1: 0.5584, R2: 0.3817, BS: 0.9228
- - pietrocagnasso/bart-paper-titles-bio: starting from the general one this model is
- fine-tuned for an additional epoch on the BIO dataset
- R1: 0.4597, R2: 0.2540, BS: 0.9006
- - pietrocagnasso/bart-paper-titles-ai: starting from the general one this model is
- fine-tuned for an additional epoch on the AI dataset
- R1: 0.4332, R2: 0.2239, BS: 0.9046
-
- Attributes:
- model_name: string with the name of the model to be used, by default it is the model
- trained on all the datasets
- """
-
- self.tokenizer = AutoTokenizer.from_pretrained(model_name)
- self.device = "cuda" if torch.cuda.is_available() else "cpu"
- self.model = BartForConditionalGeneration.from_pretrained(model_name).to(self.device)
-
-
- def generate_title_on_spot(self, text):
- """
- This method can be used to compute the title given a string with the highlights followed by the abstract.
- """
-
- # tokenize the sentence
- x = self.tokenizer.encode_plus(text,
- padding="max_length",
- max_length=1024,
- truncation=True,
- return_attention_mask=True,
- return_tensors='pt'
- )
- input_ids = x["input_ids"][0]
-
- # predict the title
- outs = self.model.generate(input_ids.unsqueeze(dim=0).to(self.device),
- num_beams=5,
- min_length=3,
- max_length=32
- )
- pred_title = self.tokenizer.decode(outs[0], skip_special_tokens=True)
-
- return pred_title
\ No newline at end of file
diff --git a/spaces/pixiou/bingo/src/components/welcome-screen.tsx b/spaces/pixiou/bingo/src/components/welcome-screen.tsx
deleted file mode 100644
index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000
--- a/spaces/pixiou/bingo/src/components/welcome-screen.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-import { useBing } from '@/lib/hooks/use-bing'
-
-const exampleMessages = [
- {
- heading: '🧐 提出复杂问题',
- message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?`
- },
- {
- heading: '🙌 获取更好的答案',
- message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?'
- },
- {
- heading: '🎨 获得创意灵感',
- message: `以海盗的口吻写一首关于外太空鳄鱼的俳句`
- }
-]
-
-export function WelcomeScreen({ setInput }: Pick<ReturnType<typeof useBing>, 'setInput'>) {
-  return (
-    <div>
-      {exampleMessages.map(example => (
-        <div key={example.heading} onClick={() => setInput(example.message)}>
-          <div>{example.heading}</div>
-          <div>“{example.message}”</div>
-        </div>
-      ))}
-    </div>
-  )
-}
diff --git a/spaces/platzi/platzi-curso-gradio-tf-clasificacion-imagenes/app.py b/spaces/platzi/platzi-curso-gradio-tf-clasificacion-imagenes/app.py
deleted file mode 100644
index 47c14069607bb29e37f9d9d6d311cd7c34803a3c..0000000000000000000000000000000000000000
--- a/spaces/platzi/platzi-curso-gradio-tf-clasificacion-imagenes/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import tensorflow as tf
-import requests
-import gradio as gr
-
-inception_net = tf.keras.applications.MobileNetV2()
-
-# Getting the labels from "https://git.io/JJkYN"
-respuesta = requests.get("https://git.io/JJkYN")
-etiquetas = respuesta.text.split("\n")
-
-def clasifica_imagen(inp):
- inp = inp.reshape((-1, 224, 224, 3))
- inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
- prediction = inception_net.predict(inp).flatten()
- confidences = {etiquetas[i]: float(prediction[i]) for i in range(1000)}
- return confidences
-
-demo = gr.Interface(fn=clasifica_imagen,
- inputs=gr.Image(shape=(224, 224)),
- outputs=gr.Label(num_top_classes=3)
- )
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/plzdontcry/dakubettergpt/src/store/store.ts b/spaces/plzdontcry/dakubettergpt/src/store/store.ts
deleted file mode 100644
index 69aba7de046c16c35570564c6fd0610ceccf3379..0000000000000000000000000000000000000000
--- a/spaces/plzdontcry/dakubettergpt/src/store/store.ts
+++ /dev/null
@@ -1,104 +0,0 @@
-import { StoreApi, create } from 'zustand';
-import { persist } from 'zustand/middleware';
-import { ChatSlice, createChatSlice } from './chat-slice';
-import { InputSlice, createInputSlice } from './input-slice';
-import { AuthSlice, createAuthSlice } from './auth-slice';
-import { ConfigSlice, createConfigSlice } from './config-slice';
-import { PromptSlice, createPromptSlice } from './prompt-slice';
-import { ToastSlice, createToastSlice } from './toast-slice';
-import {
- LocalStorageInterfaceV0ToV1,
- LocalStorageInterfaceV1ToV2,
- LocalStorageInterfaceV2ToV3,
- LocalStorageInterfaceV3ToV4,
- LocalStorageInterfaceV4ToV5,
- LocalStorageInterfaceV5ToV6,
- LocalStorageInterfaceV6ToV7,
- LocalStorageInterfaceV7oV8,
-} from '@type/chat';
-import {
- migrateV0,
- migrateV1,
- migrateV2,
- migrateV3,
- migrateV4,
- migrateV5,
- migrateV6,
- migrateV7,
-} from './migrate';
-
-export type StoreState = ChatSlice &
- InputSlice &
- AuthSlice &
- ConfigSlice &
- PromptSlice &
- ToastSlice;
-
-export type StoreSlice<T> = (
-  set: StoreApi<StoreState>['setState'],
-  get: StoreApi<StoreState>['getState']
-) => T;
-
-export const createPartializedState = (state: StoreState) => ({
- chats: state.chats,
- currentChatIndex: state.currentChatIndex,
- apiKey: state.apiKey,
- apiEndpoint: state.apiEndpoint,
- theme: state.theme,
- autoTitle: state.autoTitle,
- advancedMode: state.advancedMode,
- prompts: state.prompts,
- defaultChatConfig: state.defaultChatConfig,
- defaultSystemMessage: state.defaultSystemMessage,
- hideMenuOptions: state.hideMenuOptions,
- firstVisit: state.firstVisit,
- hideSideMenu: state.hideSideMenu,
- folders: state.folders,
- enterToSubmit: state.enterToSubmit,
- inlineLatex: state.inlineLatex,
- markdownMode: state.markdownMode,
- totalTokenUsed: state.totalTokenUsed,
- countTotalTokens: state.countTotalTokens,
-});
-
-const useStore = create<StoreState>()(
- persist(
- (set, get) => ({
- ...createChatSlice(set, get),
- ...createInputSlice(set, get),
- ...createAuthSlice(set, get),
- ...createConfigSlice(set, get),
- ...createPromptSlice(set, get),
- ...createToastSlice(set, get),
- }),
- {
- name: 'free-chat-gpt',
- partialize: (state) => createPartializedState(state),
- version: 8,
- migrate: (persistedState, version) => {
- switch (version) {
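-        // Intentional fall-through: each case applies its migration and falls into the next, so an older persisted state receives every later migration before the final break.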
- case 0:
- migrateV0(persistedState as LocalStorageInterfaceV0ToV1);
- case 1:
- migrateV1(persistedState as LocalStorageInterfaceV1ToV2);
- case 2:
- migrateV2(persistedState as LocalStorageInterfaceV2ToV3);
- case 3:
- migrateV3(persistedState as LocalStorageInterfaceV3ToV4);
- case 4:
- migrateV4(persistedState as LocalStorageInterfaceV4ToV5);
- case 5:
- migrateV5(persistedState as LocalStorageInterfaceV5ToV6);
- case 6:
- migrateV6(persistedState as LocalStorageInterfaceV6ToV7);
- case 7:
- migrateV7(persistedState as LocalStorageInterfaceV7oV8);
- break;
- }
- return persistedState as StoreState;
- },
- }
- )
-);
-
-export default useStore;
diff --git a/spaces/portal/Multidiffusion/back.html b/spaces/portal/Multidiffusion/back.html
deleted file mode 100644
index a127c41a00a46192de164b19e2ca69eb73bced0c..0000000000000000000000000000000000000000
--- a/spaces/portal/Multidiffusion/back.html
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/profayle/TerrapinTalk/README.md b/spaces/profayle/TerrapinTalk/README.md
deleted file mode 100644
index 157c08e09c40ce170f09cabe72c1d2582a36e725..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: TerrapinTalk
-emoji: 🐢
-app_file: main.py
-sdk: gradio
-sdk_version: 4.0.2
----
-# TerrapinTalk
\ No newline at end of file
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_T_F_A_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_T_F_A_.py
deleted file mode 100644
index e3cf2db2d744cdda880ec1255808f60bc3795c61..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_T_F_A_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from . import asciiTable
-
-
-class table_T_T_F_A_(asciiTable.asciiTable):
- pass
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/implementations/smb.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/implementations/smb.py
deleted file mode 100644
index e8989b0afe5db8f9117adb0a75ca56bfa187cbd8..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/implementations/smb.py
+++ /dev/null
@@ -1,325 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-This module contains SMBFileSystem class responsible for handling access to
-Windows Samba network shares by using package smbprotocol
-"""
-
-import datetime
-import uuid
-from stat import S_ISDIR, S_ISLNK
-
-import smbclient
-
-from .. import AbstractFileSystem
-from ..utils import infer_storage_options
-
-# ! pylint: disable=bad-continuation
-
-
-class SMBFileSystem(AbstractFileSystem):
- """Allow reading and writing to Windows and Samba network shares.
-
- When using `fsspec.open()` for getting a file-like object the URI
- should be specified as this format:
- ``smb://workgroup;user:password@server:port/share/folder/file.csv``.
-
- Example::
-
- >>> import fsspec
- >>> with fsspec.open(
- ... 'smb://myuser:mypassword@myserver.com/' 'share/folder/file.csv'
- ... ) as smbfile:
- ... df = pd.read_csv(smbfile, sep='|', header=None)
-
- Note that you need to pass in a valid hostname or IP address for the host
- component of the URL. Do not use the Windows/NetBIOS machine name for the
- host component.
-
- The first component of the path in the URL points to the name of the shared
- folder. Subsequent path components will point to the directory/folder/file.
-
- The URL components ``workgroup`` , ``user``, ``password`` and ``port`` may be
- optional.
-
- .. note::
-
- For working this source require `smbprotocol`_ to be installed, e.g.::
-
- $ pip install smbprotocol
- # or
- # pip install smbprotocol[kerberos]
-
- .. _smbprotocol: https://github.com/jborean93/smbprotocol#requirements
-
- Note: if using this with the ``open`` or ``open_files``, with full URLs,
- there is no way to tell if a path is relative, so all paths are assumed
- to be absolute.
- """
-
- protocol = "smb"
-
- # pylint: disable=too-many-arguments
- def __init__(
- self,
- host,
- port=None,
- username=None,
- password=None,
- timeout=60,
- encrypt=None,
- share_access=None,
- **kwargs,
- ):
- """
- You can use _get_kwargs_from_urls to get some kwargs from
- a reasonable SMB url.
-
- Authentication will be anonymous or integrated if username/password are not
- given.
-
- Parameters
- ----------
- host: str
- The remote server name/ip to connect to
- port: int or None
- Port to connect with. Usually 445, sometimes 139.
- username: str or None
- Username to connect with. Required if Kerberos auth is not being used.
- password: str or None
- User's password on the server, if using username
- timeout: int
- Connection timeout in seconds
- encrypt: bool
- Whether to force encryption or not, once this has been set to True
- the session cannot be changed back to False.
- share_access: str or None
- Specifies the default access applied to file open operations
- performed with this file system object.
- This affects whether other processes can concurrently open a handle
- to the same file.
-
- - None (the default): exclusively locks the file until closed.
- - 'r': Allow other handles to be opened with read access.
- - 'w': Allow other handles to be opened with write access.
- - 'd': Allow other handles to be opened with delete access.
- """
- super().__init__(**kwargs)
- self.host = host
- self.port = port
- self.username = username
- self.password = password
- self.timeout = timeout
- self.encrypt = encrypt
- self.temppath = kwargs.pop("temppath", "")
- self.share_access = share_access
- self._connect()
-
- @property
- def _port(self):
- return 445 if self.port is None else self.port
-
- def _connect(self):
- smbclient.register_session(
- self.host,
- username=self.username,
- password=self.password,
- port=self._port,
- encrypt=self.encrypt,
- connection_timeout=self.timeout,
- )
-
- @classmethod
- def _strip_protocol(cls, path):
- return infer_storage_options(path)["path"]
-
- @staticmethod
- def _get_kwargs_from_urls(path):
- # smb://workgroup;user:password@host:port/share/folder/file.csv
- out = infer_storage_options(path)
- out.pop("path", None)
- out.pop("protocol", None)
- return out
-
- def mkdir(self, path, create_parents=True, **kwargs):
- wpath = _as_unc_path(self.host, path)
- if create_parents:
- smbclient.makedirs(wpath, exist_ok=False, port=self._port, **kwargs)
- else:
- smbclient.mkdir(wpath, port=self._port, **kwargs)
-
- def makedirs(self, path, exist_ok=False):
- if _share_has_path(path):
- wpath = _as_unc_path(self.host, path)
- smbclient.makedirs(wpath, exist_ok=exist_ok, port=self._port)
-
- def rmdir(self, path):
- if _share_has_path(path):
- wpath = _as_unc_path(self.host, path)
- smbclient.rmdir(wpath, port=self._port)
-
- def info(self, path, **kwargs):
- wpath = _as_unc_path(self.host, path)
- stats = smbclient.stat(wpath, port=self._port, **kwargs)
- if S_ISDIR(stats.st_mode):
- stype = "directory"
- elif S_ISLNK(stats.st_mode):
- stype = "link"
- else:
- stype = "file"
- res = {
- "name": path + "/" if stype == "directory" else path,
- "size": stats.st_size,
- "type": stype,
- "uid": stats.st_uid,
- "gid": stats.st_gid,
- "time": stats.st_atime,
- "mtime": stats.st_mtime,
- }
- return res
-
- def created(self, path):
- """Return the created timestamp of a file as a datetime.datetime"""
- wpath = _as_unc_path(self.host, path)
- stats = smbclient.stat(wpath, port=self._port)
- return datetime.datetime.fromtimestamp(stats.st_ctime, tz=datetime.timezone.utc)
-
- def modified(self, path):
- """Return the modified timestamp of a file as a datetime.datetime"""
- wpath = _as_unc_path(self.host, path)
- stats = smbclient.stat(wpath, port=self._port)
- return datetime.datetime.fromtimestamp(stats.st_mtime, tz=datetime.timezone.utc)
-
- def ls(self, path, detail=True, **kwargs):
- unc = _as_unc_path(self.host, path)
- listed = smbclient.listdir(unc, port=self._port, **kwargs)
- dirs = ["/".join([path.rstrip("/"), p]) for p in listed]
- if detail:
- dirs = [self.info(d) for d in dirs]
- return dirs
-
- # pylint: disable=too-many-arguments
- def _open(
- self,
- path,
- mode="rb",
- block_size=-1,
- autocommit=True,
- cache_options=None,
- **kwargs,
- ):
- """
- block_size: int or None
- If 0, no buffering, 1, line buffering, >1, buffer that many bytes
-
- Notes
- -----
- By specifying 'share_access' in 'kwargs' it is possible to override the
- default shared access setting applied in the constructor of this object.
- """
- bls = block_size if block_size is not None and block_size >= 0 else -1
- wpath = _as_unc_path(self.host, path)
- share_access = kwargs.pop("share_access", self.share_access)
- if "w" in mode and autocommit is False:
- temp = _as_temp_path(self.host, path, self.temppath)
- return SMBFileOpener(
- wpath, temp, mode, port=self._port, block_size=bls, **kwargs
- )
- return smbclient.open_file(
- wpath,
- mode,
- buffering=bls,
- share_access=share_access,
- port=self._port,
- **kwargs,
- )
-
- def copy(self, path1, path2, **kwargs):
- """Copy within two locations in the same filesystem"""
- wpath1 = _as_unc_path(self.host, path1)
- wpath2 = _as_unc_path(self.host, path2)
- smbclient.copyfile(wpath1, wpath2, port=self._port, **kwargs)
-
- def _rm(self, path):
- if _share_has_path(path):
- wpath = _as_unc_path(self.host, path)
- stats = smbclient.stat(wpath, port=self._port)
- if S_ISDIR(stats.st_mode):
- smbclient.rmdir(wpath, port=self._port)
- else:
- smbclient.remove(wpath, port=self._port)
-
- def mv(self, path1, path2, recursive=None, maxdepth=None, **kwargs):
- wpath1 = _as_unc_path(self.host, path1)
- wpath2 = _as_unc_path(self.host, path2)
- smbclient.rename(wpath1, wpath2, port=self._port, **kwargs)
-
-
-def _as_unc_path(host, path):
- rpath = path.replace("/", "\\")
- unc = f"\\\\{host}{rpath}"
- return unc
-
-
-def _as_temp_path(host, path, temppath):
- share = path.split("/")[1]
- temp_file = f"/{share}{temppath}/{uuid.uuid4()}"
- unc = _as_unc_path(host, temp_file)
- return unc
-
-
-def _share_has_path(path):
- parts = path.count("/")
- if path.endswith("/"):
- return parts > 2
- return parts > 1
-
-
-class SMBFileOpener:
- """writes to remote temporary file, move on commit"""
-
- def __init__(self, path, temp, mode, port=445, block_size=-1, **kwargs):
- self.path = path
- self.temp = temp
- self.mode = mode
- self.block_size = block_size
- self.kwargs = kwargs
- self.smbfile = None
- self._incontext = False
- self.port = port
- self._open()
-
- def _open(self):
- if self.smbfile is None or self.smbfile.closed:
- self.smbfile = smbclient.open_file(
- self.temp,
- self.mode,
- port=self.port,
- buffering=self.block_size,
- **self.kwargs,
- )
-
- def commit(self):
- """Move temp file to definitive on success."""
- # TODO: use transaction support in SMB protocol
- smbclient.replace(self.temp, self.path, port=self.port)
-
- def discard(self):
- """Remove the temp file on failure."""
- smbclient.remove(self.temp, port=self.port)
-
- def __fspath__(self):
- return self.path
-
- def __iter__(self):
- return self.smbfile.__iter__()
-
- def __getattr__(self, item):
- return getattr(self.smbfile, item)
-
- def __enter__(self):
- self._incontext = True
- return self.smbfile.__enter__()
-
- def __exit__(self, exc_type, exc_value, traceback):
- self._incontext = False
- self.smbfile.__exit__(exc_type, exc_value, traceback)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-647ecb6e.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-647ecb6e.js
deleted file mode 100644
index 18e2857980dc5c9fa93728b60d20026f1e0053cb..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-647ecb6e.js
+++ /dev/null
@@ -1,7 +0,0 @@
-import{a as F,b as I,s as ce,N as me,t as c,P as _e,g as Ue,T as E,p as Qe,h as J,E as v,e as se,j as Ze,k as Ge,l as Ve,m as Ke,f as Je,i as Ye,n as We,o as et,q as ne,r as tt}from"./Index-9bf8add7.js";import{html as rt}from"./index-c1421b46.js";import"./index-50ad4c77.js";import"./svelte/svelte.js";import"./Button-8eeccca1.js";import"./Index-c74a8b7c.js";import"./Copy-1b5c0932.js";import"./Download-696bd40c.js";import"./BlockLabel-e3970ebb.js";import"./Empty-eeaba2d1.js";import"./Example-e03fb3b4.js";import"./index-b5ab13e3.js";import"./index-5fd0a2c9.js";import"./index-ef54ac87.js";class X{constructor(e,r,s,n,i,o,a){this.type=e,this.value=r,this.from=s,this.hash=n,this.end=i,this.children=o,this.positions=a,this.hashProp=[[I.contextHash,n]]}static create(e,r,s,n,i){let o=n+(n<<8)+e+(r<<4)|0;return new X(e,r,s,o,i,[],[])}addChild(e,r){e.prop(I.contextHash)!=this.hash&&(e=new E(e.type,e.children,e.positions,e.length,this.hashProp)),this.children.push(e),this.positions.push(r)}toTree(e,r=this.end){let s=this.children.length-1;return s>=0&&(r=Math.max(r,this.positions[s]+this.children[s].length+this.from)),new E(e.types[this.type],this.children,this.positions,r-this.from).balance({makeTree:(i,o,a)=>new E(F.none,i,o,a,this.hashProp)})}}var f;(function(t){t[t.Document=1]="Document",t[t.CodeBlock=2]="CodeBlock",t[t.FencedCode=3]="FencedCode",t[t.Blockquote=4]="Blockquote",t[t.HorizontalRule=5]="HorizontalRule",t[t.BulletList=6]="BulletList",t[t.OrderedList=7]="OrderedList",t[t.ListItem=8]="ListItem",t[t.ATXHeading1=9]="ATXHeading1",t[t.ATXHeading2=10]="ATXHeading2",t[t.ATXHeading3=11]="ATXHeading3",t[t.ATXHeading4=12]="ATXHeading4",t[t.ATXHeading5=13]="ATXHeading5",t[t.ATXHeading6=14]="ATXHeading6",t[t.SetextHeading1=15]="SetextHeading1",t[t.SetextHeading2=16]="SetextHeading2",t[t.HTMLBlock=17]="HTMLBlock",t[t.LinkReference=18]="LinkReference",t[t.Paragraph=19]="Paragraph",t[t.CommentBlock=20]="CommentBlock",t[t.ProcessingInstructionBlock=21]="ProcessingInstructionBlock",t[t.Escape=22]="Escape",t[t.Entity=23]="Entity",t[t.HardBreak=24]="HardBreak",t[t.Emphasis=25]="Emphasis",t[t.StrongEmphasis=26]="StrongEmphasis",t[t.Link=27]="Link",t[t.Image=28]="Image",t[t.InlineCode=29]="InlineCode",t[t.HTMLTag=30]="HTMLTag",t[t.Comment=31]="Comment",t[t.ProcessingInstruction=32]="ProcessingInstruction",t[t.URL=33]="URL",t[t.HeaderMark=34]="HeaderMark",t[t.QuoteMark=35]="QuoteMark",t[t.ListMark=36]="ListMark",t[t.LinkMark=37]="LinkMark",t[t.EmphasisMark=38]="EmphasisMark",t[t.CodeMark=39]="CodeMark",t[t.CodeText=40]="CodeText",t[t.CodeInfo=41]="CodeInfo",t[t.LinkTitle=42]="LinkTitle",t[t.LinkLabel=43]="LinkLabel"})(f||(f={}));class st{constructor(e,r){this.start=e,this.content=r,this.marks=[],this.parsers=[]}}class nt{constructor(){this.text="",this.baseIndent=0,this.basePos=0,this.depth=0,this.markers=[],this.pos=0,this.indent=0,this.next=-1}forward(){this.basePos>this.pos&&this.forwardInner()}forwardInner(){let e=this.skipSpace(this.basePos);this.indent=this.countIndent(e,this.pos,this.indent),this.pos=e,this.next=e==this.text.length?-1:this.text.charCodeAt(e)}skipSpace(e){return N(this.text,e)}reset(e){for(this.text=e,this.baseIndent=this.basePos=this.pos=this.indent=0,this.forwardInner(),this.depth=1;this.markers.length;)this.markers.pop()}moveBase(e){this.basePos=e,this.baseIndent=this.countIndent(e,this.pos,this.indent)}moveBaseColumn(e){this.baseIndent=e,this.basePos=this.findColumn(e)}addMarker(e){this.markers.push(e)}countIndent(e,r=0,s=0){for(let 
n=r;n=e.stack[r.depth+1].value+r.baseIndent)return!0;if(r.indent>=r.baseIndent+4)return!1;let s=(t.type==f.OrderedList?ee:W)(r,e,!1);return s>0&&(t.type!=f.BulletList||Y(r,e,!1)<0)&&r.text.charCodeAt(r.pos+s-1)==t.value}const ge={[f.Blockquote](t,e,r){return r.next!=62?!1:(r.markers.push(m(f.QuoteMark,e.lineStart+r.pos,e.lineStart+r.pos+1)),r.moveBase(r.pos+(C(r.text.charCodeAt(r.pos+1))?2:1)),t.end=e.lineStart+r.text.length,!0)},[f.ListItem](t,e,r){return r.indent-1?!1:(r.moveBaseColumn(r.baseIndent+t.value),!0)},[f.OrderedList]:ie,[f.BulletList]:ie,[f.Document](){return!0}};function C(t){return t==32||t==9||t==10||t==13}function N(t,e=0){for(;er&&C(t.charCodeAt(e-1));)e--;return e}function ke(t){if(t.next!=96&&t.next!=126)return-1;let e=t.pos+1;for(;e-1&&t.depth==e.stack.length||s<3?-1:1}function be(t,e){for(let r=t.stack.length-1;r>=0;r--)if(t.stack[r].type==e)return!0;return!1}function W(t,e,r){return(t.next==45||t.next==43||t.next==42)&&(t.pos==t.text.length-1||C(t.text.charCodeAt(t.pos+1)))&&(!r||be(e,f.BulletList)||t.skipSpace(t.pos+2)=48&&n<=57;){s++;if(s==t.text.length)return-1;n=t.text.charCodeAt(s)}return s==t.pos||s>t.pos+9||n!=46&&n!=41||st.pos+1||t.next!=49)?-1:s+1-t.pos}function Se(t){if(t.next!=35)return-1;let e=t.pos+1;for(;e6?-1:r}function we(t){if(t.next!=45&&t.next!=61||t.indent>=t.baseIndent+4)return-1;let e=t.pos+1;for(;e/,Ae=/\?>/,Z=[[/^<(?:script|pre|style)(?:\s|>|$)/i,/<\/(?:script|pre|style)>/i],[/^\s*/i.exec(s);if(i)return t.append(m(f.Comment,r,r+1+i[0].length));let o=/^\?[^]*?\?>/.exec(s);if(o)return t.append(m(f.ProcessingInstruction,r,r+1+o[0].length));let a=/^(?:![A-Z][^]*?>|!\[CDATA\[[^]*?\]\]>|\/\s*[a-zA-Z][\w-]*\s*>|\s*[a-zA-Z][\w-]*(\s+[a-zA-Z:_][\w-.:]*(?:\s*=\s*(?:[^\s"'=<>`]+|'[^']*'|"[^"]*"))?)*\s*(\/\s*)?>)/.exec(s);return a?t.append(m(f.HTMLTag,r,r+1+a[0].length)):-1},Emphasis(t,e,r){if(e!=95&&e!=42)return-1;let s=r+1;for(;t.char(s)==e;)s++;let n=t.slice(r-1,r),i=t.slice(s,s+1),o=R.test(n),a=R.test(i),l=/\s|^$/.test(n),h=/\s|^$/.test(i),u=!h&&(!a||l||o),p=!l&&(!o||h||a),d=u&&(e==42||!p||o),L=p&&(e==42||!u||a);return t.append(new A(e==95?He:Pe,r,s,(d?1:0)|(L?2:0)))},HardBreak(t,e,r){if(e==92&&t.char(r+1)==10)return t.append(m(f.HardBreak,r,r+2));if(e==32){let s=r+1;for(;t.char(s)==32;)s++;if(t.char(s)==10&&s>=r+2)return t.append(m(f.HardBreak,r,s+1))}return-1},Link(t,e,r){return e==91?t.append(new A(P,r,r+1,1)):-1},Image(t,e,r){return e==33&&t.char(r+1)==91?t.append(new A(le,r,r+2,1)):-1},LinkEnd(t,e,r){if(e!=93)return-1;for(let s=t.parts.length-1;s>=0;s--){let n=t.parts[s];if(n instanceof A&&(n.type==P||n.type==le)){if(!n.side||t.skipSpace(n.to)==r&&!/[(\[]/.test(t.slice(r+1,r+2)))return t.parts[s]=null,-1;let i=t.takeContent(s),o=t.parts[s]=ut(t,i,n.type==P?f.Link:f.Image,n.from,r+1);if(n.type==P)for(let a=0;ae?m(f.URL,e+r,i+r):i==t.length?null:!1}}function Ne(t,e,r){let s=t.charCodeAt(e);if(s!=39&&s!=34&&s!=40)return!1;let n=s==40?41:s;for(let i=e+1,o=!1;i=this.end?-1:this.text.charCodeAt(e-this.offset)}get end(){return this.offset+this.text.length}slice(e,r){return this.text.slice(e-this.offset,r-this.offset)}append(e){return this.parts.push(e),e.to}addDelimiter(e,r,s,n,i){return this.append(new A(e,r,s,(n?1:0)|(i?2:0)))}addElement(e){return this.append(e)}resolveMarkers(e){for(let s=e;s=e;l--){let g=this.parts[l];if(g instanceof A&&g.side&1&&g.type==n.type&&!(i&&(n.side&1||g.side&2)&&(g.to-g.from+o)%3==0&&((g.to-g.from)%3||o%3))){a=g;break}}if(!a)continue;let h=n.type.resolve,u=[],p=a.from,d=n.to;if(i){let 
g=Math.min(2,a.to-a.from,o);p=a.to-g,d=n.from+g,h=g==1?"Emphasis":"StrongEmphasis"}a.type.mark&&u.push(this.elt(a.type.mark,p,a.to));for(let g=l+1;g=0;r--){let s=this.parts[r];if(s instanceof A&&s.type==e)return r}return null}takeContent(e){let r=this.resolveMarkers(e);return this.parts.length=e,r}skipSpace(e){return N(this.text,e-this.offset)+this.offset}elt(e,r,s,n){return typeof e=="string"?m(this.parser.getNodeType(e),r,s,n):new Me(e,r)}}function V(t,e){if(!e.length)return t;if(!t.length)return e;let r=t.slice(),s=0;for(let n of e){for(;s(e?e-1:0))return!1;if(this.fragmentEnd<0){let i=this.fragment.to;for(;i>0&&this.input.read(i-1,i)!=`
-`;)i--;this.fragmentEnd=i?i-1:0}let s=this.cursor;s||(s=this.cursor=this.fragment.tree.cursor(),s.firstChild());let n=e+this.fragment.offset;for(;s.to<=n;)if(!s.parent())return!1;for(;;){if(s.from>=n)return this.fragment.from<=r;if(!s.childAfter(n))return!1}}matches(e){let r=this.cursor.tree;return r&&r.prop(I.contextHash)==e}takeNodes(e){let r=this.cursor,s=this.fragment.offset,n=this.fragmentEnd-(this.fragment.openEnd?1:0),i=e.absoluteLineStart,o=i,a=e.block.children.length,l=o,h=a;for(;;){if(r.to-s>n){if(r.type.isAnonymous&&r.firstChild())continue;break}if(e.dontInject.add(r.tree),e.addNode(r.tree,r.from-s),r.type.is("Block")&&(pt.indexOf(r.type.id)<0?(o=r.to-s,a=e.block.children.length):(o=l,a=h,l=r.to-s,h=e.block.children.length)),!r.nextSibling())break}for(;e.block.children.length>a;)e.block.children.pop(),e.block.positions.pop();return o-i}}const mt=ce({"Blockquote/...":c.quote,HorizontalRule:c.contentSeparator,"ATXHeading1/... SetextHeading1/...":c.heading1,"ATXHeading2/... SetextHeading2/...":c.heading2,"ATXHeading3/...":c.heading3,"ATXHeading4/...":c.heading4,"ATXHeading5/...":c.heading5,"ATXHeading6/...":c.heading6,"Comment CommentBlock":c.comment,Escape:c.escape,Entity:c.character,"Emphasis/...":c.emphasis,"StrongEmphasis/...":c.strong,"Link/... Image/...":c.link,"OrderedList/... BulletList/...":c.list,"BlockQuote/...":c.quote,"InlineCode CodeText":c.monospace,URL:c.url,"HeaderMark HardBreak QuoteMark ListMark LinkMark EmphasisMark CodeMark":c.processingInstruction,"CodeInfo LinkLabel":c.labelName,LinkTitle:c.string,Paragraph:c.content}),gt=new j(new me(Ee).extend(mt),Object.keys(z).map(t=>z[t]),Object.keys(z).map(t=>at[t]),Object.keys(z),lt,ge,Object.keys(_).map(t=>_[t]),Object.keys(_),[]);function kt(t,e,r){let s=[];for(let n=t.firstChild,i=e;;n=n.nextSibling){let o=n?n.from:r;if(o>i&&s.push({from:i,to:o}),!n)break;i=n.to}return s}function Lt(t){let{codeParser:e,htmlParser:r}=t;return{wrap:Qe((n,i)=>{let o=n.type.id;if(e&&(o==f.CodeBlock||o==f.FencedCode)){let a="";if(o==f.FencedCode){let h=n.node.getChild(f.CodeInfo);h&&(a=i.read(h.from,h.to))}let l=e(a);if(l)return{parser:l,overlay:h=>h.type.id==f.CodeText}}else if(r&&(o==f.HTMLBlock||o==f.HTMLTag))return{parser:r,overlay:kt(n.node,n.from,n.to)};return null})}}const bt={resolve:"Strikethrough",mark:"StrikethroughMark"},St={defineNodes:[{name:"Strikethrough",style:{"Strikethrough/...":c.strikethrough}},{name:"StrikethroughMark",style:c.processingInstruction}],parseInline:[{name:"Strikethrough",parse(t,e,r){if(e!=126||t.char(r+1)!=126||t.char(r+2)==126)return-1;let s=t.slice(r-1,r),n=t.slice(r+2,r+3),i=/\s|^$/.test(s),o=/\s|^$/.test(n),a=R.test(s),l=R.test(n);return t.addDelimiter(bt,r,r+2,!o&&(!l||i||a),!i&&(!a||o||l))},after:"Emphasis"}]};function y(t,e,r=0,s,n=0){let i=0,o=!0,a=-1,l=-1,h=!1,u=()=>{s.push(t.elt("TableCell",n+a,n+l,t.parser.parseInline(e.slice(a,l),n+a)))};for(let p=r;p-1)&&i++,o=!1,s&&(a>-1&&u(),s.push(t.elt("TableDelimiter",p+n,p+n+1))),a=l=-1):(h||d!=32&&d!=9)&&(a<0&&(a=p),l=p+1),h=!h&&d==92}return a>-1&&(i++,s&&u()),i}function fe(t,e){for(let r=e;rn instanceof ue)||!fe(e.text,e.basePos))return!1;let s=t.scanLine(t.absoluteLineEnd+1).text;return Oe.test(s)&&y(t,e.text,e.basePos)==y(t,s,e.basePos)},before:"SetextHeading"}]};class Ct{nextLine(){return!1}finish(e,r){return e.addLeafElement(r,e.elt("Task",r.start,r.start+r.content.length,[e.elt("TaskMarker",r.start,r.start+3),...e.parser.parseInline(r.content.slice(3),r.start+3)])),!0}}const 
At={defineNodes:[{name:"Task",block:!0,style:c.list},{name:"TaskMarker",style:c.atom}],parseBlock:[{name:"TaskList",leaf(t,e){return/^\[[ xX]\]/.test(e.content)&&t.parentType().name=="ListItem"?new Ct:null},after:"SetextHeading"}]},xt=[wt,At,St];function Re(t,e,r){return(s,n,i)=>{if(n!=t||s.char(i+1)==t)return-1;let o=[s.elt(r,i,i+1)];for(let a=i+1;a"}}),Te=new I,De=gt.configure({props:[Je.add(t=>!t.is("Block")||t.is("Document")||K(t)!=null?void 0:(e,r)=>({from:r.doc.lineAt(e.from).to,to:e.to})),Te.add(K),Ye.add({Document:()=>null}),We.add({Document:ze})]});function K(t){let e=/^(?:ATX|Setext)Heading(\d)$/.exec(t.name);return e?+e[1]:void 0}function Mt(t,e){let r=t;for(;;){let s=r.nextSibling,n;if(!s||(n=K(s.type))!=null&&n<=e)break;r=s}return r.to}const Ht=et.of((t,e,r)=>{for(let s=J(t).resolveInner(r,-1);s&&!(s.fromr)return{from:r,to:i}}return null});function te(t){return new Ve(ze,t,[Ht],"markdown")}const Pt=te(De),vt=De.configure([xt,Et,Bt,It]),Xe=te(vt);function Nt(t,e){return r=>{if(r&&t){let s=null;if(r=/\S*/.exec(r)[0],typeof t=="function"?s=t(r):s=ne.matchLanguageName(t,r,!0),s instanceof ne)return s.support?s.support.language.parser:tt.getSkippingParser(s.load());if(s)return s.parser}return e?e.parser:null}}class D{constructor(e,r,s,n,i,o,a){this.node=e,this.from=r,this.to=s,this.spaceBefore=n,this.spaceAfter=i,this.type=o,this.item=a}blank(e,r=!0){let s=this.spaceBefore+(this.node.name=="Blockquote"?">":"");if(e!=null){for(;s.length0;n--)s+=" ";return s+(r?this.spaceAfter:"")}}marker(e,r){let s=this.node.name=="OrderedList"?String(+je(this.item,e)[2]+r):"";return this.spaceBefore+s+this.type+this.spaceAfter}}function Fe(t,e){let r=[];for(let n=t;n&&n.name!="Document";n=n.parent)(n.name=="ListItem"||n.name=="Blockquote"||n.name=="FencedCode")&&r.push(n);let s=[];for(let n=r.length-1;n>=0;n--){let i=r[n],o,a=e.lineAt(i.from),l=i.from-a.from;if(i.name=="FencedCode")s.push(new D(i,l,l,"","","",null));else if(i.name=="Blockquote"&&(o=/^[ \t]*>( ?)/.exec(a.text.slice(l))))s.push(new D(i,l,l+o[0].length,"",o[1],">",null));else if(i.name=="ListItem"&&i.parent.name=="OrderedList"&&(o=/^([ \t]*)\d+([.)])([ \t]*)/.exec(a.text.slice(l)))){let h=o[3],u=o[0].length;h.length>=4&&(h=h.slice(0,h.length-4),u-=4),s.push(new D(i.parent,l,l+u,o[1],h,o[2],i))}else if(i.name=="ListItem"&&i.parent.name=="BulletList"&&(o=/^([ \t]*)([-+*])([ \t]{1,4}\[[ xX]\])?([ \t]+)/.exec(a.text.slice(l)))){let h=o[4],u=o[0].length;h.length>4&&(h=h.slice(0,h.length-4),u-=4);let p=o[2];o[3]&&(p+=o[3].replace(/[xX]/," ")),s.push(new D(i.parent,l,l+u,o[1],h,p,i))}}return s}function je(t,e){return/^(\s*)(\d+)(?=[.)])/.exec(e.sliceString(t.from,t.from+10))}function U(t,e,r,s=0){for(let n=-1,i=t;;){if(i.name=="ListItem"){let a=je(i,e),l=+a[2];if(n>=0){if(l!=n+1)return;r.push({from:i.from+a[1].length,to:i.from+a[0].length,insert:String(n+2+s)})}n=l}let o=i.nextSibling;if(!o)break;i=o}}const yt=({state:t,dispatch:e})=>{let r=J(t),{doc:s}=t,n=null,i=t.changeByRange(o=>{if(!o.empty||!Xe.isActiveAt(t,o.from))return n={range:o};let a=o.from,l=s.lineAt(a),h=Fe(r.resolveInner(a,-1),s);for(;h.length&&h[h.length-1].from>a-l.from;)h.pop();if(!h.length)return n={range:o};let u=h[h.length-1];if(u.to-u.spaceAfter.length>a-l.from)return n={range:o};let p=a>=u.to-u.spaceAfter.length&&!/\S/.test(l.text.slice(u.to));if(u.item&&p)if(u.node.firstChild.to>=a||l.from>0&&!/[^\s>]/.test(s.lineAt(l.from-1).text)){let k=h.length>1?h[h.length-2]:null,b,w="";k&&k.item?(b=l.from+k.from,w=k.marker(s,1)):b=l.from+(k?k.to:0);let 
x=[{from:b,to:a,insert:w}];return u.node.name=="OrderedList"&&U(u.item,s,x,-2),k&&k.node.name=="OrderedList"&&U(k.item,s,x),{range:v.cursor(b+w.length),changes:x}}else{let k="";for(let b=0,w=h.length-2;b<=w;b++)k+=h[b].blank(b\s*$/.exec(k.text);if(b&&b.index==u.from){let w=t.changes([{from:k.from+b.index,to:k.to},{from:l.from+u.from,to:l.to}]);return{range:o.map(w),changes:w}}}let d=[];u.node.name=="OrderedList"&&U(u.item,s,d);let L=u.item&&u.item.from]*/.exec(l.text)[0].length>=u.to)for(let k=0,b=h.length-1;k<=b;k++)S+=k==b&&!L?h[k].marker(s,1):h[k].blank(kl.from&&/\s/.test(l.text.charAt(g-l.from-1));)g--;return S=t.lineBreak+S,d.push({from:g,to:a,insert:S}),{range:v.cursor(g+S.length),changes:d}});return n?!1:(e(t.update(i,{scrollIntoView:!0,userEvent:"input"})),!0)};function de(t){return t.name=="QuoteMark"||t.name=="ListMark"}function Ot(t,e){let r=t.resolveInner(e,-1),s=e;de(r)&&(s=r.from,r=r.parent);for(let n;n=r.childBefore(s);)if(de(n))s=n.from;else if(n.name=="OrderedList"||n.name=="BulletList")r=n.lastChild,s=r.to;else break;return r}const Rt=({state:t,dispatch:e})=>{let r=J(t),s=null,n=t.changeByRange(i=>{let o=i.from,{doc:a}=t;if(i.empty&&Xe.isActiveAt(t,i.from)){let l=a.lineAt(o),h=Fe(Ot(r,o),a);if(h.length){let u=h[h.length-1],p=u.to-u.spaceAfter.length+(u.spaceAfter?1:0);if(o-l.from>p&&!/\S/.test(l.text.slice(p,o-l.from)))return{range:v.cursor(l.from+p),changes:{from:l.from+p,to:o}};if(o-l.from==p){let d=l.from+u.from;if(u.item&&u.node.from 0
- mpl.rcParams['animation.ffmpeg_path'] = "not_available_ever_xxxx"
- assert not animation.writers.is_available("ffmpeg")
- # something guaranteed to be available in path and exits immediately
- bin = "true" if sys.platform != 'win32' else "where"
- mpl.rcParams['animation.ffmpeg_path'] = bin
- assert animation.writers.is_available("ffmpeg")
-
-
-@pytest.mark.parametrize(
- "method_name",
- [pytest.param("to_html5_video", marks=pytest.mark.skipif(
- not animation.writers.is_available(mpl.rcParams["animation.writer"]),
- reason="animation writer not installed")),
- "to_jshtml"])
-@pytest.mark.parametrize('anim', [dict(frames=1)], indirect=['anim'])
-def test_embed_limit(method_name, caplog, tmpdir, anim):
- caplog.set_level("WARNING")
- with tmpdir.as_cwd():
- with mpl.rc_context({"animation.embed_limit": 1e-6}): # ~1 byte.
- getattr(anim, method_name)()
- assert len(caplog.records) == 1
- record, = caplog.records
- assert (record.name == "matplotlib.animation"
- and record.levelname == "WARNING")
-
-
-@pytest.mark.parametrize(
- "method_name",
- [pytest.param("to_html5_video", marks=pytest.mark.skipif(
- not animation.writers.is_available(mpl.rcParams["animation.writer"]),
- reason="animation writer not installed")),
- "to_jshtml"])
-@pytest.mark.parametrize('anim', [dict(frames=1)], indirect=['anim'])
-def test_cleanup_temporaries(method_name, tmpdir, anim):
- with tmpdir.as_cwd():
- getattr(anim, method_name)()
- assert list(Path(str(tmpdir)).iterdir()) == []
-
-
-@pytest.mark.skipif(shutil.which("/bin/sh") is None, reason="requires a POSIX OS")
-def test_failing_ffmpeg(tmpdir, monkeypatch, anim):
- """
- Test that we correctly raise a CalledProcessError when ffmpeg fails.
-
- To do so, mock ffmpeg using a simple executable shell script that
- succeeds when called with no arguments (so that it gets registered by
- `isAvailable`), but fails otherwise, and add it to the $PATH.
- """
- with tmpdir.as_cwd():
- monkeypatch.setenv("PATH", ".:" + os.environ["PATH"])
- exe_path = Path(str(tmpdir), "ffmpeg")
- exe_path.write_bytes(b"#!/bin/sh\n[[ $@ -eq 0 ]]\n")
- os.chmod(exe_path, 0o755)
- with pytest.raises(subprocess.CalledProcessError):
- anim.save("test.mpeg")
-
-
-@pytest.mark.parametrize("cache_frame_data", [False, True])
-def test_funcanimation_cache_frame_data(cache_frame_data):
- fig, ax = plt.subplots()
- line, = ax.plot([], [])
-
- class Frame(dict):
- # this subclassing enables to use weakref.ref()
- pass
-
- def init():
- line.set_data([], [])
- return line,
-
- def animate(frame):
- line.set_data(frame['x'], frame['y'])
- return line,
-
- frames_generated = []
-
- def frames_generator():
- for _ in range(5):
- x = np.linspace(0, 10, 100)
- y = np.random.rand(100)
-
- frame = Frame(x=x, y=y)
-
- # collect weak references to frames
- # to validate their references later
- frames_generated.append(weakref.ref(frame))
-
- yield frame
-
- MAX_FRAMES = 100
- anim = animation.FuncAnimation(fig, animate, init_func=init,
- frames=frames_generator,
- cache_frame_data=cache_frame_data,
- save_count=MAX_FRAMES)
-
- writer = NullMovieWriter()
- anim.save('unused.null', writer=writer)
- assert len(frames_generated) == 5
- np.testing.break_cycles()
- for f in frames_generated:
- # If cache_frame_data is True, then the weakref should be alive;
- # if cache_frame_data is False, then the weakref should be dead (None).
- assert (f() is None) != cache_frame_data
-
-
-@pytest.mark.parametrize('return_value', [
- # User forgot to return (returns None).
- None,
- # User returned a string.
- 'string',
- # User returned an int.
- 1,
- # User returns a sequence of other objects, e.g., string instead of Artist.
- ('string', ),
- # User forgot to return a sequence (handled in `animate` below.)
- 'artist',
-])
-def test_draw_frame(return_value):
- # test _draw_frame method
-
- fig, ax = plt.subplots()
- line, = ax.plot([])
-
- def animate(i):
- # general update func
- line.set_data([0, 1], [0, i])
- if return_value == 'artist':
- # *not* a sequence
- return line
- else:
- return return_value
-
- with pytest.raises(RuntimeError):
- animation.FuncAnimation(
- fig, animate, blit=True, cache_frame_data=False
- )
-
-
-def test_exhausted_animation(tmpdir):
- fig, ax = plt.subplots()
-
- def update(frame):
- return []
-
- anim = animation.FuncAnimation(
- fig, update, frames=iter(range(10)), repeat=False,
- cache_frame_data=False
- )
-
- with tmpdir.as_cwd():
- anim.save("test.gif", writer='pillow')
-
- with pytest.warns(UserWarning, match="exhausted"):
- anim._start()
-
-
-def test_no_frame_warning(tmpdir):
- fig, ax = plt.subplots()
-
- def update(frame):
- return []
-
- anim = animation.FuncAnimation(
- fig, update, frames=[], repeat=False,
- cache_frame_data=False
- )
-
- with pytest.warns(UserWarning, match="exhausted"):
- anim._start()
-
-
-@check_figures_equal(extensions=["png"])
-def test_animation_frame(tmpdir, fig_test, fig_ref):
- # Test the expected image after iterating through a few frames
- # we save the animation to get the iteration because we are not
- # in an interactive framework.
- ax = fig_test.add_subplot()
- ax.set_xlim(0, 2 * np.pi)
- ax.set_ylim(-1, 1)
- x = np.linspace(0, 2 * np.pi, 100)
- line, = ax.plot([], [])
-
- def init():
- line.set_data([], [])
- return line,
-
- def animate(i):
- line.set_data(x, np.sin(x + i / 100))
- return line,
-
- anim = animation.FuncAnimation(
- fig_test, animate, init_func=init, frames=5,
- blit=True, repeat=False)
- with tmpdir.as_cwd():
- anim.save("test.gif")
-
- # Reference figure without animation
- ax = fig_ref.add_subplot()
- ax.set_xlim(0, 2 * np.pi)
- ax.set_ylim(-1, 1)
-
- # 5th frame's data
- ax.plot(x, np.sin(x + 4 / 100))
-
-
-@pytest.mark.parametrize('anim', [dict(klass=dict)], indirect=['anim'])
-def test_save_count_override_warnings_has_length(anim):
-
- save_count = 5
- frames = list(range(2))
- match_target = (
- f'You passed in an explicit {save_count=} '
- "which is being ignored in favor of "
- f"{len(frames)=}."
- )
-
- with pytest.warns(UserWarning, match=re.escape(match_target)):
- anim = animation.FuncAnimation(
- **{**anim, 'frames': frames, 'save_count': save_count}
- )
- assert anim._save_count == len(frames)
- anim._init_draw()
-
-
-@pytest.mark.parametrize('anim', [dict(klass=dict)], indirect=['anim'])
-def test_save_count_override_warnings_scaler(anim):
- save_count = 5
- frames = 7
- match_target = (
- f'You passed in an explicit {save_count=} ' +
- "which is being ignored in favor of " +
- f"{frames=}."
- )
-
- with pytest.warns(UserWarning, match=re.escape(match_target)):
- anim = animation.FuncAnimation(
- **{**anim, 'frames': frames, 'save_count': save_count}
- )
-
- assert anim._save_count == frames
- anim._init_draw()
-
-
-@pytest.mark.parametrize('anim', [dict(klass=dict)], indirect=['anim'])
-def test_disable_cache_warning(anim):
- cache_frame_data = True
- frames = iter(range(5))
- match_target = (
- f"{frames=!r} which we can infer the length of, "
- "did not pass an explicit *save_count* "
- f"and passed {cache_frame_data=}. To avoid a possibly "
- "unbounded cache, frame data caching has been disabled. "
- "To suppress this warning either pass "
- "`cache_frame_data=False` or `save_count=MAX_FRAMES`."
- )
- with pytest.warns(UserWarning, match=re.escape(match_target)):
- anim = animation.FuncAnimation(
- **{**anim, 'cache_frame_data': cache_frame_data, 'frames': frames}
- )
- assert anim._cache_frame_data is False
- anim._init_draw()
-
-
-def test_movie_writer_invalid_path(anim):
- if sys.platform == "win32":
- match_str = re.escape("[WinError 3] The system cannot find the path specified:")
- else:
- match_str = re.escape("[Errno 2] No such file or directory: '/foo")
- with pytest.raises(FileNotFoundError, match=match_str):
- anim.save("/foo/bar/aardvark/thiscannotreallyexist.mp4",
- writer=animation.FFMpegFileWriter())
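The deleted animation tests above all revolve around one pattern: drive FuncAnimation from a frame generator, control frame-data caching, and hand the result to a movie writer. A minimal, hedged sketch of that pattern follows (not taken from the test file; the figure contents, frame count, and the output name "demo.gif" are illustrative):

import matplotlib
matplotlib.use("Agg")                      # headless backend so the sketch runs without a display
import matplotlib.animation as animation
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, 10)
ax.set_ylim(0, 1)

def frames():
    # Generator function: each yielded object becomes update()'s `frame` argument.
    for _ in range(5):
        yield {"x": np.linspace(0, 10, 100), "y": np.random.rand(100)}

def update(frame):
    line.set_data(frame["x"], frame["y"])
    return line,

anim = animation.FuncAnimation(fig, update, frames=frames,
                               cache_frame_data=False, save_count=5)
anim.save("demo.gif", writer="pillow")     # PillowWriter avoids the external ffmpeg dependency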
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_half.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_half.py
deleted file mode 100644
index ca849ad52ead1430a24b72c97eeabb084a2826dc..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_half.py
+++ /dev/null
@@ -1,563 +0,0 @@
-import platform
-import pytest
-
-import numpy as np
-from numpy import uint16, float16, float32, float64
-from numpy.testing import assert_, assert_equal, _OLD_PROMOTION, IS_WASM
-
-
-def assert_raises_fpe(strmatch, callable, *args, **kwargs):
- try:
- callable(*args, **kwargs)
- except FloatingPointError as exc:
- assert_(str(exc).find(strmatch) >= 0,
- "Did not raise floating point %s error" % strmatch)
- else:
- assert_(False,
- "Did not raise floating point %s error" % strmatch)
-
-class TestHalf:
- def setup_method(self):
- # An array of all possible float16 values
- self.all_f16 = np.arange(0x10000, dtype=uint16)
- self.all_f16.dtype = float16
- self.all_f32 = np.array(self.all_f16, dtype=float32)
- self.all_f64 = np.array(self.all_f16, dtype=float64)
-
- # An array of all non-NaN float16 values, in sorted order
- self.nonan_f16 = np.concatenate(
- (np.arange(0xfc00, 0x7fff, -1, dtype=uint16),
- np.arange(0x0000, 0x7c01, 1, dtype=uint16)))
- self.nonan_f16.dtype = float16
- self.nonan_f32 = np.array(self.nonan_f16, dtype=float32)
- self.nonan_f64 = np.array(self.nonan_f16, dtype=float64)
-
- # An array of all finite float16 values, in sorted order
- self.finite_f16 = self.nonan_f16[1:-1]
- self.finite_f32 = self.nonan_f32[1:-1]
- self.finite_f64 = self.nonan_f64[1:-1]
-
- def test_half_conversions(self):
- """Checks that all 16-bit values survive conversion
- to/from 32-bit and 64-bit float"""
- # Because the underlying routines preserve the NaN bits, every
- # value is preserved when converting to/from other floats.
-
- # Convert from float32 back to float16
- b = np.array(self.all_f32, dtype=float16)
- assert_equal(self.all_f16.view(dtype=uint16),
- b.view(dtype=uint16))
-
- # Convert from float64 back to float16
- b = np.array(self.all_f64, dtype=float16)
- assert_equal(self.all_f16.view(dtype=uint16),
- b.view(dtype=uint16))
-
- # Convert float16 to longdouble and back
- # This doesn't necessarily preserve the extra NaN bits,
- # so exclude NaNs.
- a_ld = np.array(self.nonan_f16, dtype=np.longdouble)
- b = np.array(a_ld, dtype=float16)
- assert_equal(self.nonan_f16.view(dtype=uint16),
- b.view(dtype=uint16))
-
- # Check the range for which all integers can be represented
- i_int = np.arange(-2048, 2049)
- i_f16 = np.array(i_int, dtype=float16)
- j = np.array(i_f16, dtype=int)
- assert_equal(i_int, j)
-
- @pytest.mark.parametrize("string_dt", ["S", "U"])
- def test_half_conversion_to_string(self, string_dt):
- # Currently uses S/U32 (which is sufficient for float32)
- expected_dt = np.dtype(f"{string_dt}32")
- assert np.promote_types(np.float16, string_dt) == expected_dt
- assert np.promote_types(string_dt, np.float16) == expected_dt
-
- arr = np.ones(3, dtype=np.float16).astype(string_dt)
- assert arr.dtype == expected_dt
-
- @pytest.mark.parametrize("string_dt", ["S", "U"])
- def test_half_conversion_from_string(self, string_dt):
- string = np.array("3.1416", dtype=string_dt)
- assert string.astype(np.float16) == np.array(3.1416, dtype=np.float16)
-
- @pytest.mark.parametrize("offset", [None, "up", "down"])
- @pytest.mark.parametrize("shift", [None, "up", "down"])
- @pytest.mark.parametrize("float_t", [np.float32, np.float64])
- @np._no_nep50_warning()
- def test_half_conversion_rounding(self, float_t, shift, offset):
- # Assumes that round to even is used during casting.
- max_pattern = np.float16(np.finfo(np.float16).max).view(np.uint16)
-
- # Test all (positive) finite numbers, denormals are most interesting
- # however:
- f16s_patterns = np.arange(0, max_pattern+1, dtype=np.uint16)
- f16s_float = f16s_patterns.view(np.float16).astype(float_t)
-
- # Shift the values by half a bit up or a down (or do not shift),
- if shift == "up":
- f16s_float = 0.5 * (f16s_float[:-1] + f16s_float[1:])[1:]
- elif shift == "down":
- f16s_float = 0.5 * (f16s_float[:-1] + f16s_float[1:])[:-1]
- else:
- f16s_float = f16s_float[1:-1]
-
- # Increase the float by a minimal value:
- if offset == "up":
- f16s_float = np.nextafter(f16s_float, float_t(np.inf))
- elif offset == "down":
- f16s_float = np.nextafter(f16s_float, float_t(-np.inf))
-
- # Convert back to float16 and its bit pattern:
- res_patterns = f16s_float.astype(np.float16).view(np.uint16)
-
- # The above calculations tries the original values, or the exact
- # mid points between the float16 values. It then further offsets them
- # by as little as possible. If no offset occurs, "round to even"
- # logic will be necessary, an arbitrarily small offset should cause
- # normal up/down rounding always.
-
- # Calculate the expected pattern:
- cmp_patterns = f16s_patterns[1:-1].copy()
-
- if shift == "down" and offset != "up":
- shift_pattern = -1
- elif shift == "up" and offset != "down":
- shift_pattern = 1
- else:
- # There cannot be a shift, either shift is None, so all rounding
- # will go back to original, or shift is reduced by offset too much.
- shift_pattern = 0
-
- # If rounding occurs, is it normal rounding or round to even?
- if offset is None:
- # Round to even occurs, modify only non-even, cast to allow + (-1)
- cmp_patterns[0::2].view(np.int16)[...] += shift_pattern
- else:
- cmp_patterns.view(np.int16)[...] += shift_pattern
-
- assert_equal(res_patterns, cmp_patterns)
-
- @pytest.mark.parametrize(["float_t", "uint_t", "bits"],
- [(np.float32, np.uint32, 23),
- (np.float64, np.uint64, 52)])
- def test_half_conversion_denormal_round_even(self, float_t, uint_t, bits):
- # Test specifically that all bits are considered when deciding
- # whether round to even should occur (i.e. no bits are lost at the
- # end. Compare also gh-12721. The most bits can get lost for the
- # smallest denormal:
- smallest_value = np.uint16(1).view(np.float16).astype(float_t)
- assert smallest_value == 2**-24
-
- # Will be rounded to zero based on round to even rule:
- rounded_to_zero = smallest_value / float_t(2)
- assert rounded_to_zero.astype(np.float16) == 0
-
- # The significand will be all 0 for the float_t, test that we do not
- # lose the lower ones of these:
- for i in range(bits):
- # slightly increasing the value should make it round up:
- larger_pattern = rounded_to_zero.view(uint_t) | uint_t(1 << i)
- larger_value = larger_pattern.view(float_t)
- assert larger_value.astype(np.float16) == smallest_value
-
- def test_nans_infs(self):
- with np.errstate(all='ignore'):
- # Check some of the ufuncs
- assert_equal(np.isnan(self.all_f16), np.isnan(self.all_f32))
- assert_equal(np.isinf(self.all_f16), np.isinf(self.all_f32))
- assert_equal(np.isfinite(self.all_f16), np.isfinite(self.all_f32))
- assert_equal(np.signbit(self.all_f16), np.signbit(self.all_f32))
- assert_equal(np.spacing(float16(65504)), np.inf)
-
- # Check comparisons of all values with NaN
- nan = float16(np.nan)
-
- assert_(not (self.all_f16 == nan).any())
- assert_(not (nan == self.all_f16).any())
-
- assert_((self.all_f16 != nan).all())
- assert_((nan != self.all_f16).all())
-
- assert_(not (self.all_f16 < nan).any())
- assert_(not (nan < self.all_f16).any())
-
- assert_(not (self.all_f16 <= nan).any())
- assert_(not (nan <= self.all_f16).any())
-
- assert_(not (self.all_f16 > nan).any())
- assert_(not (nan > self.all_f16).any())
-
- assert_(not (self.all_f16 >= nan).any())
- assert_(not (nan >= self.all_f16).any())
-
- def test_half_values(self):
- """Confirms a small number of known half values"""
- a = np.array([1.0, -1.0,
- 2.0, -2.0,
- 0.0999755859375, 0.333251953125, # 1/10, 1/3
- 65504, -65504, # Maximum magnitude
- 2.0**(-14), -2.0**(-14), # Minimum normal
- 2.0**(-24), -2.0**(-24), # Minimum subnormal
- 0, -1/1e1000, # Signed zeros
- np.inf, -np.inf])
- b = np.array([0x3c00, 0xbc00,
- 0x4000, 0xc000,
- 0x2e66, 0x3555,
- 0x7bff, 0xfbff,
- 0x0400, 0x8400,
- 0x0001, 0x8001,
- 0x0000, 0x8000,
- 0x7c00, 0xfc00], dtype=uint16)
- b.dtype = float16
- assert_equal(a, b)
-
- def test_half_rounding(self):
- """Checks that rounding when converting to half is correct"""
- a = np.array([2.0**-25 + 2.0**-35, # Rounds to minimum subnormal
- 2.0**-25, # Underflows to zero (nearest even mode)
- 2.0**-26, # Underflows to zero
- 1.0+2.0**-11 + 2.0**-16, # rounds to 1.0+2**(-10)
- 1.0+2.0**-11, # rounds to 1.0 (nearest even mode)
- 1.0+2.0**-12, # rounds to 1.0
- 65519, # rounds to 65504
- 65520], # rounds to inf
- dtype=float64)
- rounded = [2.0**-24,
- 0.0,
- 0.0,
- 1.0+2.0**(-10),
- 1.0,
- 1.0,
- 65504,
- np.inf]
-
- # Check float64->float16 rounding
- with np.errstate(over="ignore"):
- b = np.array(a, dtype=float16)
- assert_equal(b, rounded)
-
- # Check float32->float16 rounding
- a = np.array(a, dtype=float32)
- with np.errstate(over="ignore"):
- b = np.array(a, dtype=float16)
- assert_equal(b, rounded)
-
- def test_half_correctness(self):
- """Take every finite float16, and check the casting functions with
- a manual conversion."""
-
- # Create an array of all finite float16s
- a_bits = self.finite_f16.view(dtype=uint16)
-
- # Convert to 64-bit float manually
- a_sgn = (-1.0)**((a_bits & 0x8000) >> 15)
- a_exp = np.array((a_bits & 0x7c00) >> 10, dtype=np.int32) - 15
- a_man = (a_bits & 0x03ff) * 2.0**(-10)
- # Implicit bit of normalized floats
- a_man[a_exp != -15] += 1
- # Denormalized exponent is -14
- a_exp[a_exp == -15] = -14
-
- a_manual = a_sgn * a_man * 2.0**a_exp
-
- a32_fail = np.nonzero(self.finite_f32 != a_manual)[0]
- if len(a32_fail) != 0:
- bad_index = a32_fail[0]
- assert_equal(self.finite_f32, a_manual,
- "First non-equal is half value %x -> %g != %g" %
- (self.finite_f16[bad_index],
- self.finite_f32[bad_index],
- a_manual[bad_index]))
-
- a64_fail = np.nonzero(self.finite_f64 != a_manual)[0]
- if len(a64_fail) != 0:
- bad_index = a64_fail[0]
- assert_equal(self.finite_f64, a_manual,
- "First non-equal is half value %x -> %g != %g" %
- (self.finite_f16[bad_index],
- self.finite_f64[bad_index],
- a_manual[bad_index]))
-
- def test_half_ordering(self):
- """Make sure comparisons are working right"""
-
- # All non-NaN float16 values in reverse order
- a = self.nonan_f16[::-1].copy()
-
- # 32-bit float copy
- b = np.array(a, dtype=float32)
-
- # Should sort the same
- a.sort()
- b.sort()
- assert_equal(a, b)
-
- # Comparisons should work
- assert_((a[:-1] <= a[1:]).all())
- assert_(not (a[:-1] > a[1:]).any())
- assert_((a[1:] >= a[:-1]).all())
- assert_(not (a[1:] < a[:-1]).any())
- # All != except for +/-0
- assert_equal(np.nonzero(a[:-1] < a[1:])[0].size, a.size-2)
- assert_equal(np.nonzero(a[1:] > a[:-1])[0].size, a.size-2)
-
- def test_half_funcs(self):
- """Test the various ArrFuncs"""
-
- # fill
- assert_equal(np.arange(10, dtype=float16),
- np.arange(10, dtype=float32))
-
- # fillwithscalar
- a = np.zeros((5,), dtype=float16)
- a.fill(1)
- assert_equal(a, np.ones((5,), dtype=float16))
-
- # nonzero and copyswap
- a = np.array([0, 0, -1, -1/1e20, 0, 2.0**-24, 7.629e-6], dtype=float16)
- assert_equal(a.nonzero()[0],
- [2, 5, 6])
- a = a.byteswap().newbyteorder()
- assert_equal(a.nonzero()[0],
- [2, 5, 6])
-
- # dot
- a = np.arange(0, 10, 0.5, dtype=float16)
- b = np.ones((20,), dtype=float16)
- assert_equal(np.dot(a, b),
- 95)
-
- # argmax
- a = np.array([0, -np.inf, -2, 0.5, 12.55, 7.3, 2.1, 12.4], dtype=float16)
- assert_equal(a.argmax(),
- 4)
- a = np.array([0, -np.inf, -2, np.inf, 12.55, np.nan, 2.1, 12.4], dtype=float16)
- assert_equal(a.argmax(),
- 5)
-
- # getitem
- a = np.arange(10, dtype=float16)
- for i in range(10):
- assert_equal(a.item(i), i)
-
- def test_spacing_nextafter(self):
- """Test np.spacing and np.nextafter"""
- # All non-negative finite #'s
- a = np.arange(0x7c00, dtype=uint16)
- hinf = np.array((np.inf,), dtype=float16)
- hnan = np.array((np.nan,), dtype=float16)
- a_f16 = a.view(dtype=float16)
-
- assert_equal(np.spacing(a_f16[:-1]), a_f16[1:]-a_f16[:-1])
-
- assert_equal(np.nextafter(a_f16[:-1], hinf), a_f16[1:])
- assert_equal(np.nextafter(a_f16[0], -hinf), -a_f16[1])
- assert_equal(np.nextafter(a_f16[1:], -hinf), a_f16[:-1])
-
- assert_equal(np.nextafter(hinf, a_f16), a_f16[-1])
- assert_equal(np.nextafter(-hinf, a_f16), -a_f16[-1])
-
- assert_equal(np.nextafter(hinf, hinf), hinf)
- assert_equal(np.nextafter(hinf, -hinf), a_f16[-1])
- assert_equal(np.nextafter(-hinf, hinf), -a_f16[-1])
- assert_equal(np.nextafter(-hinf, -hinf), -hinf)
-
- assert_equal(np.nextafter(a_f16, hnan), hnan[0])
- assert_equal(np.nextafter(hnan, a_f16), hnan[0])
-
- assert_equal(np.nextafter(hnan, hnan), hnan)
- assert_equal(np.nextafter(hinf, hnan), hnan)
- assert_equal(np.nextafter(hnan, hinf), hnan)
-
- # switch to negatives
- a |= 0x8000
-
- assert_equal(np.spacing(a_f16[0]), np.spacing(a_f16[1]))
- assert_equal(np.spacing(a_f16[1:]), a_f16[:-1]-a_f16[1:])
-
- assert_equal(np.nextafter(a_f16[0], hinf), -a_f16[1])
- assert_equal(np.nextafter(a_f16[1:], hinf), a_f16[:-1])
- assert_equal(np.nextafter(a_f16[:-1], -hinf), a_f16[1:])
-
- assert_equal(np.nextafter(hinf, a_f16), -a_f16[-1])
- assert_equal(np.nextafter(-hinf, a_f16), a_f16[-1])
-
- assert_equal(np.nextafter(a_f16, hnan), hnan[0])
- assert_equal(np.nextafter(hnan, a_f16), hnan[0])
-
- def test_half_ufuncs(self):
- """Test the various ufuncs"""
-
- a = np.array([0, 1, 2, 4, 2], dtype=float16)
- b = np.array([-2, 5, 1, 4, 3], dtype=float16)
- c = np.array([0, -1, -np.inf, np.nan, 6], dtype=float16)
-
- assert_equal(np.add(a, b), [-2, 6, 3, 8, 5])
- assert_equal(np.subtract(a, b), [2, -4, 1, 0, -1])
- assert_equal(np.multiply(a, b), [0, 5, 2, 16, 6])
- assert_equal(np.divide(a, b), [0, 0.199951171875, 2, 1, 0.66650390625])
-
- assert_equal(np.equal(a, b), [False, False, False, True, False])
- assert_equal(np.not_equal(a, b), [True, True, True, False, True])
- assert_equal(np.less(a, b), [False, True, False, False, True])
- assert_equal(np.less_equal(a, b), [False, True, False, True, True])
- assert_equal(np.greater(a, b), [True, False, True, False, False])
- assert_equal(np.greater_equal(a, b), [True, False, True, True, False])
- assert_equal(np.logical_and(a, b), [False, True, True, True, True])
- assert_equal(np.logical_or(a, b), [True, True, True, True, True])
- assert_equal(np.logical_xor(a, b), [True, False, False, False, False])
- assert_equal(np.logical_not(a), [True, False, False, False, False])
-
- assert_equal(np.isnan(c), [False, False, False, True, False])
- assert_equal(np.isinf(c), [False, False, True, False, False])
- assert_equal(np.isfinite(c), [True, True, False, False, True])
- assert_equal(np.signbit(b), [True, False, False, False, False])
-
- assert_equal(np.copysign(b, a), [2, 5, 1, 4, 3])
-
- assert_equal(np.maximum(a, b), [0, 5, 2, 4, 3])
-
- x = np.maximum(b, c)
- assert_(np.isnan(x[3]))
- x[3] = 0
- assert_equal(x, [0, 5, 1, 0, 6])
-
- assert_equal(np.minimum(a, b), [-2, 1, 1, 4, 2])
-
- x = np.minimum(b, c)
- assert_(np.isnan(x[3]))
- x[3] = 0
- assert_equal(x, [-2, -1, -np.inf, 0, 3])
-
- assert_equal(np.fmax(a, b), [0, 5, 2, 4, 3])
- assert_equal(np.fmax(b, c), [0, 5, 1, 4, 6])
- assert_equal(np.fmin(a, b), [-2, 1, 1, 4, 2])
- assert_equal(np.fmin(b, c), [-2, -1, -np.inf, 4, 3])
-
- assert_equal(np.floor_divide(a, b), [0, 0, 2, 1, 0])
- assert_equal(np.remainder(a, b), [0, 1, 0, 0, 2])
- assert_equal(np.divmod(a, b), ([0, 0, 2, 1, 0], [0, 1, 0, 0, 2]))
- assert_equal(np.square(b), [4, 25, 1, 16, 9])
- assert_equal(np.reciprocal(b), [-0.5, 0.199951171875, 1, 0.25, 0.333251953125])
- assert_equal(np.ones_like(b), [1, 1, 1, 1, 1])
- assert_equal(np.conjugate(b), b)
- assert_equal(np.absolute(b), [2, 5, 1, 4, 3])
- assert_equal(np.negative(b), [2, -5, -1, -4, -3])
- assert_equal(np.positive(b), b)
- assert_equal(np.sign(b), [-1, 1, 1, 1, 1])
- assert_equal(np.modf(b), ([0, 0, 0, 0, 0], b))
- assert_equal(np.frexp(b), ([-0.5, 0.625, 0.5, 0.5, 0.75], [2, 3, 1, 3, 2]))
- assert_equal(np.ldexp(b, [0, 1, 2, 4, 2]), [-2, 10, 4, 64, 12])
-
- @np._no_nep50_warning()
- def test_half_coercion(self, weak_promotion):
- """Test that half gets coerced properly with the other types"""
- a16 = np.array((1,), dtype=float16)
- a32 = np.array((1,), dtype=float32)
- b16 = float16(1)
- b32 = float32(1)
-
- assert np.power(a16, 2).dtype == float16
- assert np.power(a16, 2.0).dtype == float16
- assert np.power(a16, b16).dtype == float16
- expected_dt = float32 if weak_promotion else float16
- assert np.power(a16, b32).dtype == expected_dt
- assert np.power(a16, a16).dtype == float16
- assert np.power(a16, a32).dtype == float32
-
- expected_dt = float16 if weak_promotion else float64
- assert np.power(b16, 2).dtype == expected_dt
- assert np.power(b16, 2.0).dtype == expected_dt
- assert np.power(b16, b16).dtype, float16
- assert np.power(b16, b32).dtype, float32
- assert np.power(b16, a16).dtype, float16
- assert np.power(b16, a32).dtype, float32
-
- assert np.power(a32, a16).dtype == float32
- assert np.power(a32, b16).dtype == float32
- expected_dt = float32 if weak_promotion else float16
- assert np.power(b32, a16).dtype == expected_dt
- assert np.power(b32, b16).dtype == float32
-
- @pytest.mark.skipif(platform.machine() == "armv5tel",
- reason="See gh-413.")
- @pytest.mark.skipif(IS_WASM,
- reason="fp exceptions don't work in wasm.")
- def test_half_fpe(self):
- with np.errstate(all='raise'):
- sx16 = np.array((1e-4,), dtype=float16)
- bx16 = np.array((1e4,), dtype=float16)
- sy16 = float16(1e-4)
- by16 = float16(1e4)
-
- # Underflow errors
- assert_raises_fpe('underflow', lambda a, b:a*b, sx16, sx16)
- assert_raises_fpe('underflow', lambda a, b:a*b, sx16, sy16)
- assert_raises_fpe('underflow', lambda a, b:a*b, sy16, sx16)
- assert_raises_fpe('underflow', lambda a, b:a*b, sy16, sy16)
- assert_raises_fpe('underflow', lambda a, b:a/b, sx16, bx16)
- assert_raises_fpe('underflow', lambda a, b:a/b, sx16, by16)
- assert_raises_fpe('underflow', lambda a, b:a/b, sy16, bx16)
- assert_raises_fpe('underflow', lambda a, b:a/b, sy16, by16)
- assert_raises_fpe('underflow', lambda a, b:a/b,
- float16(2.**-14), float16(2**11))
- assert_raises_fpe('underflow', lambda a, b:a/b,
- float16(-2.**-14), float16(2**11))
- assert_raises_fpe('underflow', lambda a, b:a/b,
- float16(2.**-14+2**-24), float16(2))
- assert_raises_fpe('underflow', lambda a, b:a/b,
- float16(-2.**-14-2**-24), float16(2))
- assert_raises_fpe('underflow', lambda a, b:a/b,
- float16(2.**-14+2**-23), float16(4))
-
- # Overflow errors
- assert_raises_fpe('overflow', lambda a, b:a*b, bx16, bx16)
- assert_raises_fpe('overflow', lambda a, b:a*b, bx16, by16)
- assert_raises_fpe('overflow', lambda a, b:a*b, by16, bx16)
- assert_raises_fpe('overflow', lambda a, b:a*b, by16, by16)
- assert_raises_fpe('overflow', lambda a, b:a/b, bx16, sx16)
- assert_raises_fpe('overflow', lambda a, b:a/b, bx16, sy16)
- assert_raises_fpe('overflow', lambda a, b:a/b, by16, sx16)
- assert_raises_fpe('overflow', lambda a, b:a/b, by16, sy16)
- assert_raises_fpe('overflow', lambda a, b:a+b,
- float16(65504), float16(17))
- assert_raises_fpe('overflow', lambda a, b:a-b,
- float16(-65504), float16(17))
- assert_raises_fpe('overflow', np.nextafter, float16(65504), float16(np.inf))
- assert_raises_fpe('overflow', np.nextafter, float16(-65504), float16(-np.inf))
- assert_raises_fpe('overflow', np.spacing, float16(65504))
-
- # Invalid value errors
- assert_raises_fpe('invalid', np.divide, float16(np.inf), float16(np.inf))
- assert_raises_fpe('invalid', np.spacing, float16(np.inf))
- assert_raises_fpe('invalid', np.spacing, float16(np.nan))
-
- # These should not raise
- float16(65472)+float16(32)
- float16(2**-13)/float16(2)
- float16(2**-14)/float16(2**10)
- np.spacing(float16(-65504))
- np.nextafter(float16(65504), float16(-np.inf))
- np.nextafter(float16(-65504), float16(np.inf))
- np.nextafter(float16(np.inf), float16(0))
- np.nextafter(float16(-np.inf), float16(0))
- np.nextafter(float16(0), float16(np.nan))
- np.nextafter(float16(np.nan), float16(0))
- float16(2**-14)/float16(2**10)
- float16(-2**-14)/float16(2**10)
- float16(2**-14+2**-23)/float16(2)
- float16(-2**-14-2**-23)/float16(2)
-
- def test_half_array_interface(self):
- """Test that half is compatible with __array_interface__"""
- class Dummy:
- pass
-
- a = np.ones((1,), dtype=float16)
- b = Dummy()
- b.__array_interface__ = a.__array_interface__
- c = np.array(b)
- assert_(c.dtype == float16)
- assert_equal(a, c)
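For orientation, the float16 tests above hinge on viewing all 65536 possible 16-bit patterns as float16 and checking that conversions preserve the raw bits. A self-contained sketch of that technique (illustrative, not part of the deleted test file):

import numpy as np

patterns = np.arange(0x10000, dtype=np.uint16)
all_f16 = patterns.view(np.float16)                      # reinterpret the raw bits as float16
roundtrip = all_f16.astype(np.float32).astype(np.float16)

# The conversion routines preserve NaN payload bits, so the bit views compare
# equal for every pattern (this mirrors TestHalf.test_half_conversions above).
assert np.array_equal(all_f16.view(np.uint16), roundtrip.view(np.uint16))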
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/util.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/util.py
deleted file mode 100644
index 5501d5b67e7b5defa5574a8466e404b20964193d..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/util.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import logging
-import os
-import re
-import sys
-from enum import Enum
-from typing import Optional
-
-import openai
-
-OPENAI_LOG = os.environ.get("OPENAI_LOG")
-
-logger = logging.getLogger("openai")
-
-__all__ = [
- "log_info",
- "log_debug",
- "log_warn",
- "logfmt",
-]
-
-api_key_to_header = (
- lambda api, key: {"Authorization": f"Bearer {key}"}
- if api in (ApiType.OPEN_AI, ApiType.AZURE_AD)
- else {"api-key": f"{key}"}
-)
-
-
-class ApiType(Enum):
- AZURE = 1
- OPEN_AI = 2
- AZURE_AD = 3
-
- @staticmethod
- def from_str(label):
- if label.lower() == "azure":
- return ApiType.AZURE
- elif label.lower() in ("azure_ad", "azuread"):
- return ApiType.AZURE_AD
- elif label.lower() in ("open_ai", "openai"):
- return ApiType.OPEN_AI
- else:
- raise openai.error.InvalidAPIType(
- "The API type provided in invalid. Please select one of the supported API types: 'azure', 'azure_ad', 'open_ai'"
- )
-
-
-def _console_log_level():
- if openai.log in ["debug", "info"]:
- return openai.log
- elif OPENAI_LOG in ["debug", "info"]:
- return OPENAI_LOG
- else:
- return None
-
-
-def log_debug(message, **params):
- msg = logfmt(dict(message=message, **params))
- if _console_log_level() == "debug":
- print(msg, file=sys.stderr)
- logger.debug(msg)
-
-
-def log_info(message, **params):
- msg = logfmt(dict(message=message, **params))
- if _console_log_level() in ["debug", "info"]:
- print(msg, file=sys.stderr)
- logger.info(msg)
-
-
-def log_warn(message, **params):
- msg = logfmt(dict(message=message, **params))
- print(msg, file=sys.stderr)
- logger.warn(msg)
-
-
-def logfmt(props):
- def fmt(key, val):
- # Handle case where val is a bytes or bytesarray
- if hasattr(val, "decode"):
- val = val.decode("utf-8")
- # Check if val is already a string to avoid re-encoding into ascii.
- if not isinstance(val, str):
- val = str(val)
- if re.search(r"\s", val):
- val = repr(val)
- # key should already be a string
- if re.search(r"\s", key):
- key = repr(key)
- return "{key}={val}".format(key=key, val=val)
-
- return " ".join([fmt(key, val) for key, val in sorted(props.items())])
-
-
-def get_object_classes():
- # This is here to avoid a circular dependency
- from openai.object_classes import OBJECT_CLASSES
-
- return OBJECT_CLASSES
-
-
-def convert_to_openai_object(
- resp,
- api_key=None,
- api_version=None,
- organization=None,
- engine=None,
- plain_old_data=False,
-):
- # If we get a OpenAIResponse, we'll want to return a OpenAIObject.
-
- response_ms: Optional[int] = None
- if isinstance(resp, openai.openai_response.OpenAIResponse):
- organization = resp.organization
- response_ms = resp.response_ms
- resp = resp.data
-
- if plain_old_data:
- return resp
- elif isinstance(resp, list):
- return [
- convert_to_openai_object(
- i, api_key, api_version, organization, engine=engine
- )
- for i in resp
- ]
- elif isinstance(resp, dict) and not isinstance(
- resp, openai.openai_object.OpenAIObject
- ):
- resp = resp.copy()
- klass_name = resp.get("object")
- if isinstance(klass_name, str):
- klass = get_object_classes().get(
- klass_name, openai.openai_object.OpenAIObject
- )
- else:
- klass = openai.openai_object.OpenAIObject
-
- return klass.construct_from(
- resp,
- api_key=api_key,
- api_version=api_version,
- organization=organization,
- response_ms=response_ms,
- engine=engine,
- )
- else:
- return resp
-
-
-def convert_to_dict(obj):
- """Converts a OpenAIObject back to a regular dict.
-
- Nested OpenAIObjects are also converted back to regular dicts.
-
- :param obj: The OpenAIObject to convert.
-
- :returns: The OpenAIObject as a dict.
- """
- if isinstance(obj, list):
- return [convert_to_dict(i) for i in obj]
- # This works by virtue of the fact that OpenAIObjects _are_ dicts. The dict
- # comprehension returns a regular dict and recursively applies the
- # conversion to each value.
- elif isinstance(obj, dict):
- return {k: convert_to_dict(v) for k, v in obj.items()}
- else:
- return obj
-
-
-def merge_dicts(x, y):
- z = x.copy()
- z.update(y)
- return z
-
-
-def default_api_key() -> str:
- if openai.api_key_path:
- with open(openai.api_key_path, "rt") as k:
- api_key = k.read().strip()
- if not api_key.startswith("sk-"):
- raise ValueError(f"Malformed API key in {openai.api_key_path}.")
- return api_key
- elif openai.api_key is not None:
- return openai.api_key
- else:
- raise openai.error.AuthenticationError(
-            "No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details."
- )
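As a quick illustration of the deleted helpers above (a simplified restatement, not the module itself): logfmt renders a dict as sorted key=value pairs and quotes values containing whitespace, while merge_dicts is a plain copy-and-update.

import re

def logfmt(props):
    def fmt(key, val):
        val = val.decode("utf-8") if hasattr(val, "decode") else str(val)
        if re.search(r"\s", val):
            val = repr(val)                # quote values containing whitespace
        return f"{key}={val}"
    return " ".join(fmt(k, v) for k, v in sorted(props.items()))

print(logfmt({"message": "rate limited", "status": 429}))
# message='rate limited' status=429

x, y = {"a": 1}, {"a": 2, "b": 3}
print({**x, **y})                          # what merge_dicts(x, y) returns: {'a': 2, 'b': 3}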
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/eval.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/eval.py
deleted file mode 100644
index ce0c50a810ab16826fb67f995c0066afb74dc820..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/eval.py
+++ /dev/null
@@ -1,419 +0,0 @@
-"""
-Top level ``eval`` module.
-"""
-from __future__ import annotations
-
-import tokenize
-from typing import TYPE_CHECKING
-import warnings
-
-from pandas.util._exceptions import find_stack_level
-from pandas.util._validators import validate_bool_kwarg
-
-from pandas.core.dtypes.common import is_extension_array_dtype
-
-from pandas.core.computation.engines import ENGINES
-from pandas.core.computation.expr import (
- PARSERS,
- Expr,
-)
-from pandas.core.computation.parsing import tokenize_string
-from pandas.core.computation.scope import ensure_scope
-from pandas.core.generic import NDFrame
-
-from pandas.io.formats.printing import pprint_thing
-
-if TYPE_CHECKING:
- from pandas.core.computation.ops import BinOp
-
-
-def _check_engine(engine: str | None) -> str:
- """
- Make sure a valid engine is passed.
-
- Parameters
- ----------
- engine : str
- String to validate.
-
- Raises
- ------
- KeyError
- * If an invalid engine is passed.
- ImportError
- * If numexpr was requested but doesn't exist.
-
- Returns
- -------
- str
- Engine name.
- """
- from pandas.core.computation.check import NUMEXPR_INSTALLED
- from pandas.core.computation.expressions import USE_NUMEXPR
-
- if engine is None:
- engine = "numexpr" if USE_NUMEXPR else "python"
-
- if engine not in ENGINES:
- valid_engines = list(ENGINES.keys())
- raise KeyError(
- f"Invalid engine '{engine}' passed, valid engines are {valid_engines}"
- )
-
- # TODO: validate this in a more general way (thinking of future engines
- # that won't necessarily be import-able)
- # Could potentially be done on engine instantiation
- if engine == "numexpr" and not NUMEXPR_INSTALLED:
- raise ImportError(
- "'numexpr' is not installed or an unsupported version. Cannot use "
- "engine='numexpr' for query/eval if 'numexpr' is not installed"
- )
-
- return engine
-
-
-def _check_parser(parser: str):
- """
- Make sure a valid parser is passed.
-
- Parameters
- ----------
- parser : str
-
- Raises
- ------
- KeyError
- * If an invalid parser is passed
- """
- if parser not in PARSERS:
- raise KeyError(
- f"Invalid parser '{parser}' passed, valid parsers are {PARSERS.keys()}"
- )
-
-
-def _check_resolvers(resolvers):
- if resolvers is not None:
- for resolver in resolvers:
- if not hasattr(resolver, "__getitem__"):
- name = type(resolver).__name__
- raise TypeError(
- f"Resolver of type '{name}' does not "
- "implement the __getitem__ method"
- )
-
-
-def _check_expression(expr):
- """
- Make sure an expression is not an empty string
-
- Parameters
- ----------
- expr : object
- An object that can be converted to a string
-
- Raises
- ------
- ValueError
- * If expr is an empty string
- """
- if not expr:
- raise ValueError("expr cannot be an empty string")
-
-
-def _convert_expression(expr) -> str:
- """
- Convert an object to an expression.
-
- This function converts an object to an expression (a unicode string) and
- checks to make sure it isn't empty after conversion. This is used to
- convert operators to their string representation for recursive calls to
- :func:`~pandas.eval`.
-
- Parameters
- ----------
- expr : object
- The object to be converted to a string.
-
- Returns
- -------
- str
- The string representation of an object.
-
- Raises
- ------
- ValueError
- * If the expression is empty.
- """
- s = pprint_thing(expr)
- _check_expression(s)
- return s
-
-
-def _check_for_locals(expr: str, stack_level: int, parser: str):
- at_top_of_stack = stack_level == 0
- not_pandas_parser = parser != "pandas"
-
- if not_pandas_parser:
- msg = "The '@' prefix is only supported by the pandas parser"
- elif at_top_of_stack:
- msg = (
- "The '@' prefix is not allowed in top-level eval calls.\n"
- "please refer to your variables by name without the '@' prefix."
- )
-
- if at_top_of_stack or not_pandas_parser:
- for toknum, tokval in tokenize_string(expr):
- if toknum == tokenize.OP and tokval == "@":
- raise SyntaxError(msg)
-
-
-def eval(
- expr: str | BinOp, # we leave BinOp out of the docstr bc it isn't for users
- parser: str = "pandas",
- engine: str | None = None,
- local_dict=None,
- global_dict=None,
- resolvers=(),
- level: int = 0,
- target=None,
- inplace: bool = False,
-):
- """
- Evaluate a Python expression as a string using various backends.
-
- The following arithmetic operations are supported: ``+``, ``-``, ``*``,
- ``/``, ``**``, ``%``, ``//`` (python engine only) along with the following
- boolean operations: ``|`` (or), ``&`` (and), and ``~`` (not).
- Additionally, the ``'pandas'`` parser allows the use of :keyword:`and`,
- :keyword:`or`, and :keyword:`not` with the same semantics as the
- corresponding bitwise operators. :class:`~pandas.Series` and
- :class:`~pandas.DataFrame` objects are supported and behave as they would
- with plain ol' Python evaluation.
-
- Parameters
- ----------
- expr : str
- The expression to evaluate. This string cannot contain any Python
- `statements
-        <https://docs.python.org/3/reference/simple_stmts.html>`__,
- only Python `expressions
-        <https://docs.python.org/3/reference/expressions.html>`__.
- parser : {'pandas', 'python'}, default 'pandas'
- The parser to use to construct the syntax tree from the expression. The
- default of ``'pandas'`` parses code slightly different than standard
- Python. Alternatively, you can parse an expression using the
- ``'python'`` parser to retain strict Python semantics. See the
-        :ref:`enhancing performance <enhancingperf.eval>` documentation for
- more details.
- engine : {'python', 'numexpr'}, default 'numexpr'
-
- The engine used to evaluate the expression. Supported engines are
-
- - None : tries to use ``numexpr``, falls back to ``python``
- - ``'numexpr'`` : This default engine evaluates pandas objects using
- numexpr for large speed ups in complex expressions with large frames.
- - ``'python'`` : Performs operations as if you had ``eval``'d in top
- level python. This engine is generally not that useful.
-
- More backends may be available in the future.
- local_dict : dict or None, optional
- A dictionary of local variables, taken from locals() by default.
- global_dict : dict or None, optional
- A dictionary of global variables, taken from globals() by default.
- resolvers : list of dict-like or None, optional
- A list of objects implementing the ``__getitem__`` special method that
- you can use to inject an additional collection of namespaces to use for
- variable lookup. For example, this is used in the
- :meth:`~DataFrame.query` method to inject the
- ``DataFrame.index`` and ``DataFrame.columns``
- variables that refer to their respective :class:`~pandas.DataFrame`
- instance attributes.
- level : int, optional
- The number of prior stack frames to traverse and add to the current
- scope. Most users will **not** need to change this parameter.
- target : object, optional, default None
- This is the target object for assignment. It is used when there is
- variable assignment in the expression. If so, then `target` must
- support item assignment with string keys, and if a copy is being
- returned, it must also support `.copy()`.
- inplace : bool, default False
- If `target` is provided, and the expression mutates `target`, whether
- to modify `target` inplace. Otherwise, return a copy of `target` with
- the mutation.
-
- Returns
- -------
- ndarray, numeric scalar, DataFrame, Series, or None
- The completion value of evaluating the given code or None if ``inplace=True``.
-
- Raises
- ------
- ValueError
- There are many instances where such an error can be raised:
-
- - `target=None`, but the expression is multiline.
- - The expression is multiline, but not all them have item assignment.
- An example of such an arrangement is this:
-
- a = b + 1
- a + 2
-
- Here, there are expressions on different lines, making it multiline,
- but the last line has no variable assigned to the output of `a + 2`.
- - `inplace=True`, but the expression is missing item assignment.
- - Item assignment is provided, but the `target` does not support
- string item assignment.
- - Item assignment is provided and `inplace=False`, but the `target`
- does not support the `.copy()` method
-
- See Also
- --------
- DataFrame.query : Evaluates a boolean expression to query the columns
- of a frame.
- DataFrame.eval : Evaluate a string describing operations on
- DataFrame columns.
-
- Notes
- -----
- The ``dtype`` of any objects involved in an arithmetic ``%`` operation are
- recursively cast to ``float64``.
-
-    See the :ref:`enhancing performance <enhancingperf.eval>` documentation for
- more details.
-
- Examples
- --------
- >>> df = pd.DataFrame({"animal": ["dog", "pig"], "age": [10, 20]})
- >>> df
- animal age
- 0 dog 10
- 1 pig 20
-
- We can add a new column using ``pd.eval``:
-
- >>> pd.eval("double_age = df.age * 2", target=df)
- animal age double_age
- 0 dog 10 20
- 1 pig 20 40
- """
- inplace = validate_bool_kwarg(inplace, "inplace")
-
- exprs: list[str | BinOp]
- if isinstance(expr, str):
- _check_expression(expr)
- exprs = [e.strip() for e in expr.splitlines() if e.strip() != ""]
- else:
- # ops.BinOp; for internal compat, not intended to be passed by users
- exprs = [expr]
- multi_line = len(exprs) > 1
-
- if multi_line and target is None:
- raise ValueError(
- "multi-line expressions are only valid in the "
- "context of data, use DataFrame.eval"
- )
- engine = _check_engine(engine)
- _check_parser(parser)
- _check_resolvers(resolvers)
-
- ret = None
- first_expr = True
- target_modified = False
-
- for expr in exprs:
- expr = _convert_expression(expr)
- _check_for_locals(expr, level, parser)
-
- # get our (possibly passed-in) scope
- env = ensure_scope(
- level + 1,
- global_dict=global_dict,
- local_dict=local_dict,
- resolvers=resolvers,
- target=target,
- )
-
- parsed_expr = Expr(expr, engine=engine, parser=parser, env=env)
-
- if engine == "numexpr" and (
- is_extension_array_dtype(parsed_expr.terms.return_type)
- or getattr(parsed_expr.terms, "operand_types", None) is not None
- and any(
- is_extension_array_dtype(elem)
- for elem in parsed_expr.terms.operand_types
- )
- ):
- warnings.warn(
- "Engine has switched to 'python' because numexpr does not support "
- "extension array dtypes. Please set your engine to python manually.",
- RuntimeWarning,
- stacklevel=find_stack_level(),
- )
- engine = "python"
-
- # construct the engine and evaluate the parsed expression
- eng = ENGINES[engine]
- eng_inst = eng(parsed_expr)
- ret = eng_inst.evaluate()
-
- if parsed_expr.assigner is None:
- if multi_line:
- raise ValueError(
- "Multi-line expressions are only valid "
- "if all expressions contain an assignment"
- )
- if inplace:
- raise ValueError("Cannot operate inplace if there is no assignment")
-
- # assign if needed
- assigner = parsed_expr.assigner
- if env.target is not None and assigner is not None:
- target_modified = True
-
- # if returning a copy, copy only on the first assignment
- if not inplace and first_expr:
- try:
- target = env.target
- if isinstance(target, NDFrame):
- target = target.copy(deep=None)
- else:
- target = target.copy()
- except AttributeError as err:
- raise ValueError("Cannot return a copy of the target") from err
- else:
- target = env.target
-
- # TypeError is most commonly raised (e.g. int, list), but you
- # get IndexError if you try to do this assignment on np.ndarray.
- # we will ignore numpy warnings here; e.g. if trying
- # to use a non-numeric indexer
- try:
- with warnings.catch_warnings(record=True):
- # TODO: Filter the warnings we actually care about here.
- if inplace and isinstance(target, NDFrame):
- target.loc[:, assigner] = ret
- else:
- target[ # pyright: ignore[reportGeneralTypeIssues]
- assigner
- ] = ret
- except (TypeError, IndexError) as err:
- raise ValueError("Cannot assign expression output to target") from err
-
- if not resolvers:
- resolvers = ({assigner: ret},)
- else:
- # existing resolver needs updated to handle
- # case of mutating existing column in copy
- for resolver in resolvers:
- if assigner in resolver:
- resolver[assigner] = ret
- break
- else:
- resolvers += ({assigner: ret},)
-
- ret = None
- first_expr = False
-
- # We want to exclude `inplace=None` as being False.
- if inplace is False:
- return target if target_modified else ret
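To make the assignment-and-copy behaviour documented above concrete, here is a short hedged sketch; the frame mirrors the docstring example, and engine="python" is chosen only so the snippet does not require numexpr:

import pandas as pd

df = pd.DataFrame({"animal": ["dog", "pig"], "age": [10, 20]})

# With an assignment and the default inplace=False, eval operates on a copy of
# `target` and returns that copy; the original frame is left untouched.
out = pd.eval("double_age = df.age * 2", target=df, engine="python")
print(out)
print("double_age" in df.columns)   # False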
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/_util.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/_util.py
deleted file mode 100644
index 3b2ae5daffdbaf515a330a54a83e550751e29fdb..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/_util.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from __future__ import annotations
-
-from typing import Callable
-
-from pandas.compat._optional import import_optional_dependency
-
-import pandas as pd
-
-
-def _arrow_dtype_mapping() -> dict:
- pa = import_optional_dependency("pyarrow")
- return {
- pa.int8(): pd.Int8Dtype(),
- pa.int16(): pd.Int16Dtype(),
- pa.int32(): pd.Int32Dtype(),
- pa.int64(): pd.Int64Dtype(),
- pa.uint8(): pd.UInt8Dtype(),
- pa.uint16(): pd.UInt16Dtype(),
- pa.uint32(): pd.UInt32Dtype(),
- pa.uint64(): pd.UInt64Dtype(),
- pa.bool_(): pd.BooleanDtype(),
- pa.string(): pd.StringDtype(),
- pa.float32(): pd.Float32Dtype(),
- pa.float64(): pd.Float64Dtype(),
- }
-
-
-def arrow_string_types_mapper() -> Callable:
- pa = import_optional_dependency("pyarrow")
-
- return {
- pa.string(): pd.StringDtype(storage="pyarrow_numpy"),
- pa.large_string(): pd.StringDtype(storage="pyarrow_numpy"),
- }.get
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/__init__.py
deleted file mode 100644
index 317730745b6e3a0278a48b7bb810cf43e718e787..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from pandas.io.sas.sasreader import read_sas
-
-__all__ = ["read_sas"]
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/_tokenizer.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/_tokenizer.py
deleted file mode 100644
index 5f00253e2f67b6f438451bb907480d06ec6c094e..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/html5lib/_tokenizer.py
+++ /dev/null
@@ -1,1735 +0,0 @@
-from __future__ import absolute_import, division, unicode_literals
-
-from pip._vendor.six import unichr as chr
-
-from collections import deque, OrderedDict
-from sys import version_info
-
-from .constants import spaceCharacters
-from .constants import entities
-from .constants import asciiLetters, asciiUpper2Lower
-from .constants import digits, hexDigits, EOF
-from .constants import tokenTypes, tagTokenTypes
-from .constants import replacementCharacters
-
-from ._inputstream import HTMLInputStream
-
-from ._trie import Trie
-
-entitiesTrie = Trie(entities)
-
-if version_info >= (3, 7):
- attributeMap = dict
-else:
- attributeMap = OrderedDict
-
-
-class HTMLTokenizer(object):
- """ This class takes care of tokenizing HTML.
-
- * self.currentToken
- Holds the token that is currently being processed.
-
- * self.state
- Holds a reference to the method to be invoked... XXX
-
- * self.stream
- Points to HTMLInputStream object.
- """
-
- def __init__(self, stream, parser=None, **kwargs):
-
- self.stream = HTMLInputStream(stream, **kwargs)
- self.parser = parser
-
- # Setup the initial tokenizer state
- self.escapeFlag = False
- self.lastFourChars = []
- self.state = self.dataState
- self.escape = False
-
- # The current token being created
- self.currentToken = None
- super(HTMLTokenizer, self).__init__()
-
- def __iter__(self):
- """ This is where the magic happens.
-
- We do our usually processing through the states and when we have a token
- to return we yield the token which pauses processing until the next token
- is requested.
- """
- self.tokenQueue = deque([])
- # Start processing. When EOF is reached self.state will return False
- # instead of True and the loop will terminate.
- while self.state():
- while self.stream.errors:
- yield {"type": tokenTypes["ParseError"], "data": self.stream.errors.pop(0)}
- while self.tokenQueue:
- yield self.tokenQueue.popleft()
-
- def consumeNumberEntity(self, isHex):
- """This function returns either U+FFFD or the character based on the
- decimal or hexadecimal representation. It also discards ";" if present.
- If not present self.tokenQueue.append({"type": tokenTypes["ParseError"]}) is invoked.
- """
-
- allowed = digits
- radix = 10
- if isHex:
- allowed = hexDigits
- radix = 16
-
- charStack = []
-
- # Consume all the characters that are in range while making sure we
- # don't hit an EOF.
- c = self.stream.char()
- while c in allowed and c is not EOF:
- charStack.append(c)
- c = self.stream.char()
-
- # Convert the set of characters consumed to an int.
- charAsInt = int("".join(charStack), radix)
-
- # Certain characters get replaced with others
- if charAsInt in replacementCharacters:
- char = replacementCharacters[charAsInt]
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "illegal-codepoint-for-numeric-entity",
- "datavars": {"charAsInt": charAsInt}})
- elif ((0xD800 <= charAsInt <= 0xDFFF) or
- (charAsInt > 0x10FFFF)):
- char = "\uFFFD"
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "illegal-codepoint-for-numeric-entity",
- "datavars": {"charAsInt": charAsInt}})
- else:
- # Should speed up this check somehow (e.g. move the set to a constant)
- if ((0x0001 <= charAsInt <= 0x0008) or
- (0x000E <= charAsInt <= 0x001F) or
- (0x007F <= charAsInt <= 0x009F) or
- (0xFDD0 <= charAsInt <= 0xFDEF) or
- charAsInt in frozenset([0x000B, 0xFFFE, 0xFFFF, 0x1FFFE,
- 0x1FFFF, 0x2FFFE, 0x2FFFF, 0x3FFFE,
- 0x3FFFF, 0x4FFFE, 0x4FFFF, 0x5FFFE,
- 0x5FFFF, 0x6FFFE, 0x6FFFF, 0x7FFFE,
- 0x7FFFF, 0x8FFFE, 0x8FFFF, 0x9FFFE,
- 0x9FFFF, 0xAFFFE, 0xAFFFF, 0xBFFFE,
- 0xBFFFF, 0xCFFFE, 0xCFFFF, 0xDFFFE,
- 0xDFFFF, 0xEFFFE, 0xEFFFF, 0xFFFFE,
- 0xFFFFF, 0x10FFFE, 0x10FFFF])):
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data":
- "illegal-codepoint-for-numeric-entity",
- "datavars": {"charAsInt": charAsInt}})
- try:
- # Try/except needed as UCS-2 Python builds' unichar only works
- # within the BMP.
- char = chr(charAsInt)
- except ValueError:
- v = charAsInt - 0x10000
- char = chr(0xD800 | (v >> 10)) + chr(0xDC00 | (v & 0x3FF))
-
- # Discard the ; if present. Otherwise, put it back on the queue and
- # invoke parseError on parser.
- if c != ";":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "numeric-entity-without-semicolon"})
- self.stream.unget(c)
-
- return char
-
- def consumeEntity(self, allowedChar=None, fromAttribute=False):
- # Initialise to the default output for when no entity is matched
- output = "&"
-
- charStack = [self.stream.char()]
- if (charStack[0] in spaceCharacters or charStack[0] in (EOF, "<", "&") or
- (allowedChar is not None and allowedChar == charStack[0])):
- self.stream.unget(charStack[0])
-
- elif charStack[0] == "#":
- # Read the next character to see if it's hex or decimal
- hex = False
- charStack.append(self.stream.char())
- if charStack[-1] in ("x", "X"):
- hex = True
- charStack.append(self.stream.char())
-
- # charStack[-1] should be the first digit
- if (hex and charStack[-1] in hexDigits) \
- or (not hex and charStack[-1] in digits):
- # At least one digit found, so consume the whole number
- self.stream.unget(charStack[-1])
- output = self.consumeNumberEntity(hex)
- else:
- # No digits found
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "expected-numeric-entity"})
- self.stream.unget(charStack.pop())
- output = "&" + "".join(charStack)
-
- else:
- # At this point in the process might have named entity. Entities
- # are stored in the global variable "entities".
- #
- # Consume characters and compare to these to a substring of the
- # entity names in the list until the substring no longer matches.
- while (charStack[-1] is not EOF):
- if not entitiesTrie.has_keys_with_prefix("".join(charStack)):
- break
- charStack.append(self.stream.char())
-
- # At this point we have a string that starts with some characters
- # that may match an entity
- # Try to find the longest entity the string will match to take care
- # of ¬i for instance.
- try:
- entityName = entitiesTrie.longest_prefix("".join(charStack[:-1]))
- entityLength = len(entityName)
- except KeyError:
- entityName = None
-
- if entityName is not None:
- if entityName[-1] != ";":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "named-entity-without-semicolon"})
- if (entityName[-1] != ";" and fromAttribute and
- (charStack[entityLength] in asciiLetters or
- charStack[entityLength] in digits or
- charStack[entityLength] == "=")):
- self.stream.unget(charStack.pop())
- output = "&" + "".join(charStack)
- else:
- output = entities[entityName]
- self.stream.unget(charStack.pop())
- output += "".join(charStack[entityLength:])
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-named-entity"})
- self.stream.unget(charStack.pop())
- output = "&" + "".join(charStack)
-
- if fromAttribute:
- self.currentToken["data"][-1][1] += output
- else:
- if output in spaceCharacters:
- tokenType = "SpaceCharacters"
- else:
- tokenType = "Characters"
- self.tokenQueue.append({"type": tokenTypes[tokenType], "data": output})
-
- def processEntityInAttribute(self, allowedChar):
- """This method replaces the need for "entityInAttributeValueState".
- """
- self.consumeEntity(allowedChar=allowedChar, fromAttribute=True)
-
- def emitCurrentToken(self):
- """This method is a generic handler for emitting the tags. It also sets
- the state to "data" because that's what's needed after a token has been
- emitted.
- """
- token = self.currentToken
- # Add token to the queue to be yielded
- if (token["type"] in tagTokenTypes):
- token["name"] = token["name"].translate(asciiUpper2Lower)
- if token["type"] == tokenTypes["StartTag"]:
- raw = token["data"]
- data = attributeMap(raw)
- if len(raw) > len(data):
- # we had some duplicated attribute, fix so first wins
- data.update(raw[::-1])
- token["data"] = data
-
- if token["type"] == tokenTypes["EndTag"]:
- if token["data"]:
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "attributes-in-end-tag"})
- if token["selfClosing"]:
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "self-closing-flag-on-end-tag"})
- self.tokenQueue.append(token)
- self.state = self.dataState
-
- # Below are the various tokenizer states worked out.
- def dataState(self):
- data = self.stream.char()
- if data == "&":
- self.state = self.entityDataState
- elif data == "<":
- self.state = self.tagOpenState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\u0000"})
- elif data is EOF:
- # Tokenization ends.
- return False
- elif data in spaceCharacters:
- # Directly after emitting a token you switch back to the "data
- # state". At that point spaceCharacters are important so they are
- # emitted separately.
- self.tokenQueue.append({"type": tokenTypes["SpaceCharacters"], "data":
- data + self.stream.charsUntil(spaceCharacters, True)})
- # No need to update lastFourChars here, since the first space will
- # have already been appended to lastFourChars and will have broken
- # any sequences
- else:
- chars = self.stream.charsUntil(("&", "<", "\u0000"))
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
- data + chars})
- return True
-
- def entityDataState(self):
- self.consumeEntity()
- self.state = self.dataState
- return True
-
- def rcdataState(self):
- data = self.stream.char()
- if data == "&":
- self.state = self.characterReferenceInRcdata
- elif data == "<":
- self.state = self.rcdataLessThanSignState
- elif data == EOF:
- # Tokenization ends.
- return False
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- elif data in spaceCharacters:
- # Directly after emitting a token you switch back to the "data
- # state". At that point spaceCharacters are important so they are
- # emitted separately.
- self.tokenQueue.append({"type": tokenTypes["SpaceCharacters"], "data":
- data + self.stream.charsUntil(spaceCharacters, True)})
- # No need to update lastFourChars here, since the first space will
- # have already been appended to lastFourChars and will have broken
- # any sequences
- else:
- chars = self.stream.charsUntil(("&", "<", "\u0000"))
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
- data + chars})
- return True
-
- def characterReferenceInRcdata(self):
- self.consumeEntity()
- self.state = self.rcdataState
- return True
-
- def rawtextState(self):
- data = self.stream.char()
- if data == "<":
- self.state = self.rawtextLessThanSignState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- elif data == EOF:
- # Tokenization ends.
- return False
- else:
- chars = self.stream.charsUntil(("<", "\u0000"))
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
- data + chars})
- return True
-
- def scriptDataState(self):
- data = self.stream.char()
- if data == "<":
- self.state = self.scriptDataLessThanSignState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- elif data == EOF:
- # Tokenization ends.
- return False
- else:
- chars = self.stream.charsUntil(("<", "\u0000"))
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
- data + chars})
- return True
-
- def plaintextState(self):
- data = self.stream.char()
- if data == EOF:
- # Tokenization ends.
- return False
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
- data + self.stream.charsUntil("\u0000")})
- return True
-
- def tagOpenState(self):
- data = self.stream.char()
- if data == "!":
- self.state = self.markupDeclarationOpenState
- elif data == "/":
- self.state = self.closeTagOpenState
- elif data in asciiLetters:
- self.currentToken = {"type": tokenTypes["StartTag"],
- "name": data, "data": [],
- "selfClosing": False,
- "selfClosingAcknowledged": False}
- self.state = self.tagNameState
- elif data == ">":
- # XXX In theory it could be something besides a tag name. But
- # do we really care?
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-tag-name-but-got-right-bracket"})
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<>"})
- self.state = self.dataState
- elif data == "?":
- # XXX In theory it could be something besides a tag name. But
- # do we really care?
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-tag-name-but-got-question-mark"})
- self.stream.unget(data)
- self.state = self.bogusCommentState
- else:
- # XXX
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-tag-name"})
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
- self.stream.unget(data)
- self.state = self.dataState
- return True
-
- def closeTagOpenState(self):
- data = self.stream.char()
- if data in asciiLetters:
- self.currentToken = {"type": tokenTypes["EndTag"], "name": data,
- "data": [], "selfClosing": False}
- self.state = self.tagNameState
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-closing-tag-but-got-right-bracket"})
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-closing-tag-but-got-eof"})
-            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
- self.state = self.dataState
- else:
- # XXX data can be _'_...
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-closing-tag-but-got-char",
- "datavars": {"data": data}})
- self.stream.unget(data)
- self.state = self.bogusCommentState
- return True
-
- def tagNameState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.state = self.beforeAttributeNameState
- elif data == ">":
- self.emitCurrentToken()
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-tag-name"})
- self.state = self.dataState
- elif data == "/":
- self.state = self.selfClosingStartTagState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["name"] += "\uFFFD"
- else:
- self.currentToken["name"] += data
- # (Don't use charsUntil here, because tag names are
- # very short and it's faster to not do anything fancy)
- return True
-
- def rcdataLessThanSignState(self):
- data = self.stream.char()
- if data == "/":
- self.temporaryBuffer = ""
- self.state = self.rcdataEndTagOpenState
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
- self.stream.unget(data)
- self.state = self.rcdataState
- return True
-
- def rcdataEndTagOpenState(self):
- data = self.stream.char()
- if data in asciiLetters:
- self.temporaryBuffer += data
- self.state = self.rcdataEndTagNameState
- else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
- self.stream.unget(data)
- self.state = self.rcdataState
- return True
-
- def rcdataEndTagNameState(self):
- appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
- data = self.stream.char()
- if data in spaceCharacters and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.state = self.beforeAttributeNameState
- elif data == "/" and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.state = self.selfClosingStartTagState
- elif data == ">" and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.emitCurrentToken()
- self.state = self.dataState
- elif data in asciiLetters:
- self.temporaryBuffer += data
- else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"],
-                                    "data": "</" + self.temporaryBuffer})
- self.stream.unget(data)
- self.state = self.rcdataState
- return True
-
- def rawtextLessThanSignState(self):
- data = self.stream.char()
- if data == "/":
- self.temporaryBuffer = ""
- self.state = self.rawtextEndTagOpenState
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
- self.stream.unget(data)
- self.state = self.rawtextState
- return True
-
- def rawtextEndTagOpenState(self):
- data = self.stream.char()
- if data in asciiLetters:
- self.temporaryBuffer += data
- self.state = self.rawtextEndTagNameState
- else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
- self.stream.unget(data)
- self.state = self.rawtextState
- return True
-
- def rawtextEndTagNameState(self):
- appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
- data = self.stream.char()
- if data in spaceCharacters and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.state = self.beforeAttributeNameState
- elif data == "/" and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.state = self.selfClosingStartTagState
- elif data == ">" and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.emitCurrentToken()
- self.state = self.dataState
- elif data in asciiLetters:
- self.temporaryBuffer += data
- else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"],
-                                    "data": "</" + self.temporaryBuffer})
- self.stream.unget(data)
- self.state = self.rawtextState
- return True
-
- def scriptDataLessThanSignState(self):
- data = self.stream.char()
- if data == "/":
- self.temporaryBuffer = ""
- self.state = self.scriptDataEndTagOpenState
- elif data == "!":
-            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<!"})
-            self.state = self.scriptDataEscapeStartState
-        else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
-            self.stream.unget(data)
-            self.state = self.scriptDataState
-        return True
-
-    def scriptDataEndTagOpenState(self):
-        data = self.stream.char()
-        if data in asciiLetters:
-            self.temporaryBuffer += data
-            self.state = self.scriptDataEndTagNameState
-        else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
-            self.stream.unget(data)
-            self.state = self.scriptDataState
-        return True
-
-    def scriptDataEndTagNameState(self):
-        appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
-        data = self.stream.char()
-        if data in spaceCharacters and appropriate:
-            self.currentToken = {"type": tokenTypes["EndTag"],
-                                 "name": self.temporaryBuffer,
-                                 "data": [], "selfClosing": False}
-            self.state = self.beforeAttributeNameState
-        elif data == "/" and appropriate:
-            self.currentToken = {"type": tokenTypes["EndTag"],
-                                 "name": self.temporaryBuffer,
-                                 "data": [], "selfClosing": False}
-            self.state = self.selfClosingStartTagState
-        elif data == ">" and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.emitCurrentToken()
- self.state = self.dataState
- elif data in asciiLetters:
- self.temporaryBuffer += data
- else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"],
-                                    "data": "</" + self.temporaryBuffer})
- self.stream.unget(data)
- self.state = self.scriptDataState
- return True
-
- def scriptDataEscapeStartState(self):
- data = self.stream.char()
- if data == "-":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
- self.state = self.scriptDataEscapeStartDashState
- else:
- self.stream.unget(data)
- self.state = self.scriptDataState
- return True
-
- def scriptDataEscapeStartDashState(self):
- data = self.stream.char()
- if data == "-":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
- self.state = self.scriptDataEscapedDashDashState
- else:
- self.stream.unget(data)
- self.state = self.scriptDataState
- return True
-
- def scriptDataEscapedState(self):
- data = self.stream.char()
- if data == "-":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
- self.state = self.scriptDataEscapedDashState
- elif data == "<":
- self.state = self.scriptDataEscapedLessThanSignState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- elif data == EOF:
- self.state = self.dataState
- else:
- chars = self.stream.charsUntil(("<", "-", "\u0000"))
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
- data + chars})
- return True
-
- def scriptDataEscapedDashState(self):
- data = self.stream.char()
- if data == "-":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
- self.state = self.scriptDataEscapedDashDashState
- elif data == "<":
- self.state = self.scriptDataEscapedLessThanSignState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- self.state = self.scriptDataEscapedState
- elif data == EOF:
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- self.state = self.scriptDataEscapedState
- return True
-
- def scriptDataEscapedDashDashState(self):
- data = self.stream.char()
- if data == "-":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
- elif data == "<":
- self.state = self.scriptDataEscapedLessThanSignState
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": ">"})
- self.state = self.scriptDataState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- self.state = self.scriptDataEscapedState
- elif data == EOF:
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- self.state = self.scriptDataEscapedState
- return True
-
- def scriptDataEscapedLessThanSignState(self):
- data = self.stream.char()
- if data == "/":
- self.temporaryBuffer = ""
- self.state = self.scriptDataEscapedEndTagOpenState
- elif data in asciiLetters:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<" + data})
- self.temporaryBuffer = data
- self.state = self.scriptDataDoubleEscapeStartState
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
- self.stream.unget(data)
- self.state = self.scriptDataEscapedState
- return True
-
- def scriptDataEscapedEndTagOpenState(self):
- data = self.stream.char()
- if data in asciiLetters:
- self.temporaryBuffer = data
- self.state = self.scriptDataEscapedEndTagNameState
- else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
- self.stream.unget(data)
- self.state = self.scriptDataEscapedState
- return True
-
- def scriptDataEscapedEndTagNameState(self):
- appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
- data = self.stream.char()
- if data in spaceCharacters and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.state = self.beforeAttributeNameState
- elif data == "/" and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.state = self.selfClosingStartTagState
- elif data == ">" and appropriate:
- self.currentToken = {"type": tokenTypes["EndTag"],
- "name": self.temporaryBuffer,
- "data": [], "selfClosing": False}
- self.emitCurrentToken()
- self.state = self.dataState
- elif data in asciiLetters:
- self.temporaryBuffer += data
- else:
-            self.tokenQueue.append({"type": tokenTypes["Characters"],
-                                    "data": "</" + self.temporaryBuffer})
- self.stream.unget(data)
- self.state = self.scriptDataEscapedState
- return True
-
- def scriptDataDoubleEscapeStartState(self):
- data = self.stream.char()
- if data in (spaceCharacters | frozenset(("/", ">"))):
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- if self.temporaryBuffer.lower() == "script":
- self.state = self.scriptDataDoubleEscapedState
- else:
- self.state = self.scriptDataEscapedState
- elif data in asciiLetters:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- self.temporaryBuffer += data
- else:
- self.stream.unget(data)
- self.state = self.scriptDataEscapedState
- return True
-
- def scriptDataDoubleEscapedState(self):
- data = self.stream.char()
- if data == "-":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
- self.state = self.scriptDataDoubleEscapedDashState
- elif data == "<":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
- self.state = self.scriptDataDoubleEscapedLessThanSignState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- elif data == EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-script-in-script"})
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- return True
-
- def scriptDataDoubleEscapedDashState(self):
- data = self.stream.char()
- if data == "-":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
- self.state = self.scriptDataDoubleEscapedDashDashState
- elif data == "<":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
- self.state = self.scriptDataDoubleEscapedLessThanSignState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- self.state = self.scriptDataDoubleEscapedState
- elif data == EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-script-in-script"})
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- self.state = self.scriptDataDoubleEscapedState
- return True
-
- def scriptDataDoubleEscapedDashDashState(self):
- data = self.stream.char()
- if data == "-":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
- elif data == "<":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
- self.state = self.scriptDataDoubleEscapedLessThanSignState
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": ">"})
- self.state = self.scriptDataState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": "\uFFFD"})
- self.state = self.scriptDataDoubleEscapedState
- elif data == EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-script-in-script"})
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- self.state = self.scriptDataDoubleEscapedState
- return True
-
- def scriptDataDoubleEscapedLessThanSignState(self):
- data = self.stream.char()
- if data == "/":
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "/"})
- self.temporaryBuffer = ""
- self.state = self.scriptDataDoubleEscapeEndState
- else:
- self.stream.unget(data)
- self.state = self.scriptDataDoubleEscapedState
- return True
-
- def scriptDataDoubleEscapeEndState(self):
- data = self.stream.char()
- if data in (spaceCharacters | frozenset(("/", ">"))):
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- if self.temporaryBuffer.lower() == "script":
- self.state = self.scriptDataEscapedState
- else:
- self.state = self.scriptDataDoubleEscapedState
- elif data in asciiLetters:
- self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
- self.temporaryBuffer += data
- else:
- self.stream.unget(data)
- self.state = self.scriptDataDoubleEscapedState
- return True
-
- def beforeAttributeNameState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.stream.charsUntil(spaceCharacters, True)
- elif data in asciiLetters:
- self.currentToken["data"].append([data, ""])
- self.state = self.attributeNameState
- elif data == ">":
- self.emitCurrentToken()
- elif data == "/":
- self.state = self.selfClosingStartTagState
- elif data in ("'", '"', "=", "<"):
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "invalid-character-in-attribute-name"})
- self.currentToken["data"].append([data, ""])
- self.state = self.attributeNameState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"].append(["\uFFFD", ""])
- self.state = self.attributeNameState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-attribute-name-but-got-eof"})
- self.state = self.dataState
- else:
- self.currentToken["data"].append([data, ""])
- self.state = self.attributeNameState
- return True
-
- def attributeNameState(self):
- data = self.stream.char()
- leavingThisState = True
- emitToken = False
- if data == "=":
- self.state = self.beforeAttributeValueState
- elif data in asciiLetters:
- self.currentToken["data"][-1][0] += data +\
- self.stream.charsUntil(asciiLetters, True)
- leavingThisState = False
- elif data == ">":
- # XXX If we emit here the attributes are converted to a dict
- # without being checked and when the code below runs we error
- # because data is a dict not a list
- emitToken = True
- elif data in spaceCharacters:
- self.state = self.afterAttributeNameState
- elif data == "/":
- self.state = self.selfClosingStartTagState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"][-1][0] += "\uFFFD"
- leavingThisState = False
- elif data in ("'", '"', "<"):
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data":
- "invalid-character-in-attribute-name"})
- self.currentToken["data"][-1][0] += data
- leavingThisState = False
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "eof-in-attribute-name"})
- self.state = self.dataState
- else:
- self.currentToken["data"][-1][0] += data
- leavingThisState = False
-
- if leavingThisState:
- # Attributes are not dropped at this stage. That happens when the
- # start tag token is emitted so values can still be safely appended
- # to attributes, but we do want to report the parse error in time.
- self.currentToken["data"][-1][0] = (
- self.currentToken["data"][-1][0].translate(asciiUpper2Lower))
- for name, _ in self.currentToken["data"][:-1]:
- if self.currentToken["data"][-1][0] == name:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "duplicate-attribute"})
- break
- # XXX Fix for above XXX
- if emitToken:
- self.emitCurrentToken()
- return True
-
- def afterAttributeNameState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.stream.charsUntil(spaceCharacters, True)
- elif data == "=":
- self.state = self.beforeAttributeValueState
- elif data == ">":
- self.emitCurrentToken()
- elif data in asciiLetters:
- self.currentToken["data"].append([data, ""])
- self.state = self.attributeNameState
- elif data == "/":
- self.state = self.selfClosingStartTagState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"].append(["\uFFFD", ""])
- self.state = self.attributeNameState
- elif data in ("'", '"', "<"):
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "invalid-character-after-attribute-name"})
- self.currentToken["data"].append([data, ""])
- self.state = self.attributeNameState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-end-of-tag-but-got-eof"})
- self.state = self.dataState
- else:
- self.currentToken["data"].append([data, ""])
- self.state = self.attributeNameState
- return True
-
- def beforeAttributeValueState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.stream.charsUntil(spaceCharacters, True)
- elif data == "\"":
- self.state = self.attributeValueDoubleQuotedState
- elif data == "&":
- self.state = self.attributeValueUnQuotedState
- self.stream.unget(data)
- elif data == "'":
- self.state = self.attributeValueSingleQuotedState
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-attribute-value-but-got-right-bracket"})
- self.emitCurrentToken()
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"][-1][1] += "\uFFFD"
- self.state = self.attributeValueUnQuotedState
- elif data in ("=", "<", "`"):
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "equals-in-unquoted-attribute-value"})
- self.currentToken["data"][-1][1] += data
- self.state = self.attributeValueUnQuotedState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-attribute-value-but-got-eof"})
- self.state = self.dataState
- else:
- self.currentToken["data"][-1][1] += data
- self.state = self.attributeValueUnQuotedState
- return True
-
- def attributeValueDoubleQuotedState(self):
- data = self.stream.char()
- if data == "\"":
- self.state = self.afterAttributeValueState
- elif data == "&":
- self.processEntityInAttribute('"')
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"][-1][1] += "\uFFFD"
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-attribute-value-double-quote"})
- self.state = self.dataState
- else:
- self.currentToken["data"][-1][1] += data +\
- self.stream.charsUntil(("\"", "&", "\u0000"))
- return True
-
- def attributeValueSingleQuotedState(self):
- data = self.stream.char()
- if data == "'":
- self.state = self.afterAttributeValueState
- elif data == "&":
- self.processEntityInAttribute("'")
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"][-1][1] += "\uFFFD"
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-attribute-value-single-quote"})
- self.state = self.dataState
- else:
- self.currentToken["data"][-1][1] += data +\
- self.stream.charsUntil(("'", "&", "\u0000"))
- return True
-
- def attributeValueUnQuotedState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.state = self.beforeAttributeNameState
- elif data == "&":
- self.processEntityInAttribute(">")
- elif data == ">":
- self.emitCurrentToken()
- elif data in ('"', "'", "=", "<", "`"):
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-character-in-unquoted-attribute-value"})
- self.currentToken["data"][-1][1] += data
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"][-1][1] += "\uFFFD"
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-attribute-value-no-quotes"})
- self.state = self.dataState
- else:
- self.currentToken["data"][-1][1] += data + self.stream.charsUntil(
- frozenset(("&", ">", '"', "'", "=", "<", "`", "\u0000")) | spaceCharacters)
- return True
-
- def afterAttributeValueState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.state = self.beforeAttributeNameState
- elif data == ">":
- self.emitCurrentToken()
- elif data == "/":
- self.state = self.selfClosingStartTagState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-EOF-after-attribute-value"})
- self.stream.unget(data)
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-character-after-attribute-value"})
- self.stream.unget(data)
- self.state = self.beforeAttributeNameState
- return True
-
- def selfClosingStartTagState(self):
- data = self.stream.char()
- if data == ">":
- self.currentToken["selfClosing"] = True
- self.emitCurrentToken()
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data":
- "unexpected-EOF-after-solidus-in-tag"})
- self.stream.unget(data)
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-character-after-solidus-in-tag"})
- self.stream.unget(data)
- self.state = self.beforeAttributeNameState
- return True
-
- def bogusCommentState(self):
- # Make a new comment token and give it as value all the characters
- # until the first > or EOF (charsUntil checks for EOF automatically)
- # and emit it.
- data = self.stream.charsUntil(">")
- data = data.replace("\u0000", "\uFFFD")
- self.tokenQueue.append(
- {"type": tokenTypes["Comment"], "data": data})
-
- # Eat the character directly after the bogus comment which is either a
- # ">" or an EOF.
- self.stream.char()
- self.state = self.dataState
- return True
-
- def markupDeclarationOpenState(self):
- charStack = [self.stream.char()]
- if charStack[-1] == "-":
- charStack.append(self.stream.char())
- if charStack[-1] == "-":
- self.currentToken = {"type": tokenTypes["Comment"], "data": ""}
- self.state = self.commentStartState
- return True
- elif charStack[-1] in ('d', 'D'):
- matched = True
- for expected in (('o', 'O'), ('c', 'C'), ('t', 'T'),
- ('y', 'Y'), ('p', 'P'), ('e', 'E')):
- charStack.append(self.stream.char())
- if charStack[-1] not in expected:
- matched = False
- break
- if matched:
- self.currentToken = {"type": tokenTypes["Doctype"],
- "name": "",
- "publicId": None, "systemId": None,
- "correct": True}
- self.state = self.doctypeState
- return True
- elif (charStack[-1] == "[" and
- self.parser is not None and
- self.parser.tree.openElements and
- self.parser.tree.openElements[-1].namespace != self.parser.tree.defaultNamespace):
- matched = True
- for expected in ["C", "D", "A", "T", "A", "["]:
- charStack.append(self.stream.char())
- if charStack[-1] != expected:
- matched = False
- break
- if matched:
- self.state = self.cdataSectionState
- return True
-
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-dashes-or-doctype"})
-
- while charStack:
- self.stream.unget(charStack.pop())
- self.state = self.bogusCommentState
- return True
-
- def commentStartState(self):
- data = self.stream.char()
- if data == "-":
- self.state = self.commentStartDashState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"] += "\uFFFD"
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "incorrect-comment"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-comment"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["data"] += data
- self.state = self.commentState
- return True
-
- def commentStartDashState(self):
- data = self.stream.char()
- if data == "-":
- self.state = self.commentEndState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"] += "-\uFFFD"
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "incorrect-comment"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-comment"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["data"] += "-" + data
- self.state = self.commentState
- return True
-
- def commentState(self):
- data = self.stream.char()
- if data == "-":
- self.state = self.commentEndDashState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"] += "\uFFFD"
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "eof-in-comment"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["data"] += data + \
- self.stream.charsUntil(("-", "\u0000"))
- return True
-
- def commentEndDashState(self):
- data = self.stream.char()
- if data == "-":
- self.state = self.commentEndState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"] += "-\uFFFD"
- self.state = self.commentState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-comment-end-dash"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["data"] += "-" + data
- self.state = self.commentState
- return True
-
- def commentEndState(self):
- data = self.stream.char()
- if data == ">":
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"] += "--\uFFFD"
- self.state = self.commentState
- elif data == "!":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-bang-after-double-dash-in-comment"})
- self.state = self.commentEndBangState
- elif data == "-":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-dash-after-double-dash-in-comment"})
- self.currentToken["data"] += data
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-comment-double-dash"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- # XXX
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-comment"})
- self.currentToken["data"] += "--" + data
- self.state = self.commentState
- return True
-
- def commentEndBangState(self):
- data = self.stream.char()
- if data == ">":
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data == "-":
- self.currentToken["data"] += "--!"
- self.state = self.commentEndDashState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["data"] += "--!\uFFFD"
- self.state = self.commentState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-comment-end-bang-state"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["data"] += "--!" + data
- self.state = self.commentState
- return True
-
- def doctypeState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.state = self.beforeDoctypeNameState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-doctype-name-but-got-eof"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "need-space-after-doctype"})
- self.stream.unget(data)
- self.state = self.beforeDoctypeNameState
- return True
-
- def beforeDoctypeNameState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- pass
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-doctype-name-but-got-right-bracket"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["name"] = "\uFFFD"
- self.state = self.doctypeNameState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-doctype-name-but-got-eof"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["name"] = data
- self.state = self.doctypeNameState
- return True
-
- def doctypeNameState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
- self.state = self.afterDoctypeNameState
- elif data == ">":
- self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["name"] += "\uFFFD"
- self.state = self.doctypeNameState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype-name"})
- self.currentToken["correct"] = False
- self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["name"] += data
- return True
-
- def afterDoctypeNameState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- pass
- elif data == ">":
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.currentToken["correct"] = False
- self.stream.unget(data)
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- if data in ("p", "P"):
- matched = True
- for expected in (("u", "U"), ("b", "B"), ("l", "L"),
- ("i", "I"), ("c", "C")):
- data = self.stream.char()
- if data not in expected:
- matched = False
- break
- if matched:
- self.state = self.afterDoctypePublicKeywordState
- return True
- elif data in ("s", "S"):
- matched = True
- for expected in (("y", "Y"), ("s", "S"), ("t", "T"),
- ("e", "E"), ("m", "M")):
- data = self.stream.char()
- if data not in expected:
- matched = False
- break
- if matched:
- self.state = self.afterDoctypeSystemKeywordState
- return True
-
- # All the characters read before the current 'data' will be
- # [a-zA-Z], so they're garbage in the bogus doctype and can be
- # discarded; only the latest character might be '>' or EOF
- # and needs to be ungetted
- self.stream.unget(data)
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "expected-space-or-right-bracket-in-doctype", "datavars":
- {"data": data}})
- self.currentToken["correct"] = False
- self.state = self.bogusDoctypeState
-
- return True
-
- def afterDoctypePublicKeywordState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.state = self.beforeDoctypePublicIdentifierState
- elif data in ("'", '"'):
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.stream.unget(data)
- self.state = self.beforeDoctypePublicIdentifierState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.stream.unget(data)
- self.state = self.beforeDoctypePublicIdentifierState
- return True
-
- def beforeDoctypePublicIdentifierState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- pass
- elif data == "\"":
- self.currentToken["publicId"] = ""
- self.state = self.doctypePublicIdentifierDoubleQuotedState
- elif data == "'":
- self.currentToken["publicId"] = ""
- self.state = self.doctypePublicIdentifierSingleQuotedState
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-end-of-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.currentToken["correct"] = False
- self.state = self.bogusDoctypeState
- return True
-
- def doctypePublicIdentifierDoubleQuotedState(self):
- data = self.stream.char()
- if data == "\"":
- self.state = self.afterDoctypePublicIdentifierState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["publicId"] += "\uFFFD"
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-end-of-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["publicId"] += data
- return True
-
- def doctypePublicIdentifierSingleQuotedState(self):
- data = self.stream.char()
- if data == "'":
- self.state = self.afterDoctypePublicIdentifierState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["publicId"] += "\uFFFD"
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-end-of-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["publicId"] += data
- return True
-
- def afterDoctypePublicIdentifierState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.state = self.betweenDoctypePublicAndSystemIdentifiersState
- elif data == ">":
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data == '"':
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.currentToken["systemId"] = ""
- self.state = self.doctypeSystemIdentifierDoubleQuotedState
- elif data == "'":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.currentToken["systemId"] = ""
- self.state = self.doctypeSystemIdentifierSingleQuotedState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.currentToken["correct"] = False
- self.state = self.bogusDoctypeState
- return True
-
- def betweenDoctypePublicAndSystemIdentifiersState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- pass
- elif data == ">":
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data == '"':
- self.currentToken["systemId"] = ""
- self.state = self.doctypeSystemIdentifierDoubleQuotedState
- elif data == "'":
- self.currentToken["systemId"] = ""
- self.state = self.doctypeSystemIdentifierSingleQuotedState
- elif data == EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.currentToken["correct"] = False
- self.state = self.bogusDoctypeState
- return True
-
- def afterDoctypeSystemKeywordState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- self.state = self.beforeDoctypeSystemIdentifierState
- elif data in ("'", '"'):
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.stream.unget(data)
- self.state = self.beforeDoctypeSystemIdentifierState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.stream.unget(data)
- self.state = self.beforeDoctypeSystemIdentifierState
- return True
-
- def beforeDoctypeSystemIdentifierState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- pass
- elif data == "\"":
- self.currentToken["systemId"] = ""
- self.state = self.doctypeSystemIdentifierDoubleQuotedState
- elif data == "'":
- self.currentToken["systemId"] = ""
- self.state = self.doctypeSystemIdentifierSingleQuotedState
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.currentToken["correct"] = False
- self.state = self.bogusDoctypeState
- return True
-
- def doctypeSystemIdentifierDoubleQuotedState(self):
- data = self.stream.char()
- if data == "\"":
- self.state = self.afterDoctypeSystemIdentifierState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["systemId"] += "\uFFFD"
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-end-of-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["systemId"] += data
- return True
-
- def doctypeSystemIdentifierSingleQuotedState(self):
- data = self.stream.char()
- if data == "'":
- self.state = self.afterDoctypeSystemIdentifierState
- elif data == "\u0000":
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- self.currentToken["systemId"] += "\uFFFD"
- elif data == ">":
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-end-of-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.currentToken["systemId"] += data
- return True
-
- def afterDoctypeSystemIdentifierState(self):
- data = self.stream.char()
- if data in spaceCharacters:
- pass
- elif data == ">":
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "eof-in-doctype"})
- self.currentToken["correct"] = False
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
- "unexpected-char-in-doctype"})
- self.state = self.bogusDoctypeState
- return True
-
- def bogusDoctypeState(self):
- data = self.stream.char()
- if data == ">":
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- elif data is EOF:
- # XXX EMIT
- self.stream.unget(data)
- self.tokenQueue.append(self.currentToken)
- self.state = self.dataState
- else:
- pass
- return True
-
- def cdataSectionState(self):
- data = []
- while True:
- data.append(self.stream.charsUntil("]"))
- data.append(self.stream.charsUntil(">"))
- char = self.stream.char()
- if char == EOF:
- break
- else:
- assert char == ">"
- if data[-1][-2:] == "]]":
- data[-1] = data[-1][:-2]
- break
- else:
- data.append(char)
-
- data = "".join(data) # pylint:disable=redefined-variable-type
- # Deal with null here rather than in the parser
- nullCount = data.count("\u0000")
- if nullCount > 0:
- for _ in range(nullCount):
- self.tokenQueue.append({"type": tokenTypes["ParseError"],
- "data": "invalid-codepoint"})
- data = data.replace("\u0000", "\uFFFD")
- if data:
- self.tokenQueue.append({"type": tokenTypes["Characters"],
- "data": data})
- self.state = self.dataState
- return True
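For reference, the tokenizer module removed above is consumed by iterating an HTMLTokenizer instance, which yields token dicts carrying a numeric "type" plus tag- or text-specific fields. Below is a minimal usage sketch, assuming the standalone html5lib package from PyPI is available in place of this vendored copy; note that html5lib._tokenizer is a private module, so the import path is an assumption based on the file above rather than a stable public API.

    from html5lib._tokenizer import HTMLTokenizer  # private module; path mirrors the vendored copy above
    from html5lib.constants import tokenTypes

    # Invert the name -> int mapping so numeric token types print as readable names.
    type_names = {value: name for name, value in tokenTypes.items()}

    for token in HTMLTokenizer('<p class="x">hi</p>'):
        # StartTag/EndTag tokens carry "name" (attributes live in "data");
        # Characters, SpaceCharacters and Comment tokens carry text in "data".
        print(type_names[token["type"]], token.get("name", token.get("data")))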
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/env_settings.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/env_settings.py
deleted file mode 100644
index 6c446e51c6abf91a61edd87554aa05b27af7f2e3..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/env_settings.py
+++ /dev/null
@@ -1,350 +0,0 @@
-import os
-import warnings
-from pathlib import Path
-from typing import AbstractSet, Any, Callable, ClassVar, Dict, List, Mapping, Optional, Tuple, Type, Union
-
-from .config import BaseConfig, Extra
-from .fields import ModelField
-from .main import BaseModel
-from .types import JsonWrapper
-from .typing import StrPath, display_as_type, get_origin, is_union
-from .utils import deep_update, lenient_issubclass, path_type, sequence_like
-
-env_file_sentinel = str(object())
-
-SettingsSourceCallable = Callable[['BaseSettings'], Dict[str, Any]]
-DotenvType = Union[StrPath, List[StrPath], Tuple[StrPath, ...]]
-
-
-class SettingsError(ValueError):
- pass
-
-
-class BaseSettings(BaseModel):
- """
- Base class for settings, allowing values to be overridden by environment variables.
-
- This is useful in production for secrets you do not wish to save in code, it plays nicely with docker(-compose),
- Heroku and any 12 factor app design.
- """
-
- def __init__(
- __pydantic_self__,
- _env_file: Optional[DotenvType] = env_file_sentinel,
- _env_file_encoding: Optional[str] = None,
- _env_nested_delimiter: Optional[str] = None,
- _secrets_dir: Optional[StrPath] = None,
- **values: Any,
- ) -> None:
-        # Uses something other than `self` as the first arg to allow "self" as a settable attribute
- super().__init__(
- **__pydantic_self__._build_values(
- values,
- _env_file=_env_file,
- _env_file_encoding=_env_file_encoding,
- _env_nested_delimiter=_env_nested_delimiter,
- _secrets_dir=_secrets_dir,
- )
- )
-
- def _build_values(
- self,
- init_kwargs: Dict[str, Any],
- _env_file: Optional[DotenvType] = None,
- _env_file_encoding: Optional[str] = None,
- _env_nested_delimiter: Optional[str] = None,
- _secrets_dir: Optional[StrPath] = None,
- ) -> Dict[str, Any]:
- # Configure built-in sources
- init_settings = InitSettingsSource(init_kwargs=init_kwargs)
- env_settings = EnvSettingsSource(
- env_file=(_env_file if _env_file != env_file_sentinel else self.__config__.env_file),
- env_file_encoding=(
- _env_file_encoding if _env_file_encoding is not None else self.__config__.env_file_encoding
- ),
- env_nested_delimiter=(
- _env_nested_delimiter if _env_nested_delimiter is not None else self.__config__.env_nested_delimiter
- ),
- env_prefix_len=len(self.__config__.env_prefix),
- )
- file_secret_settings = SecretsSettingsSource(secrets_dir=_secrets_dir or self.__config__.secrets_dir)
- # Provide a hook to set built-in sources priority and add / remove sources
- sources = self.__config__.customise_sources(
- init_settings=init_settings, env_settings=env_settings, file_secret_settings=file_secret_settings
- )
- if sources:
- return deep_update(*reversed([source(self) for source in sources]))
- else:
- # no one should mean to do this, but I think returning an empty dict is marginally preferable
- # to an informative error and much better than a confusing error
- return {}
-
- class Config(BaseConfig):
- env_prefix: str = ''
- env_file: Optional[DotenvType] = None
- env_file_encoding: Optional[str] = None
- env_nested_delimiter: Optional[str] = None
- secrets_dir: Optional[StrPath] = None
- validate_all: bool = True
- extra: Extra = Extra.forbid
- arbitrary_types_allowed: bool = True
- case_sensitive: bool = False
-
- @classmethod
- def prepare_field(cls, field: ModelField) -> None:
- env_names: Union[List[str], AbstractSet[str]]
- field_info_from_config = cls.get_field_info(field.name)
-
- env = field_info_from_config.get('env') or field.field_info.extra.get('env')
- if env is None:
- if field.has_alias:
- warnings.warn(
- 'aliases are no longer used by BaseSettings to define which environment variables to read. '
- 'Instead use the "env" field setting. '
- 'See https://pydantic-docs.helpmanual.io/usage/settings/#environment-variable-names',
- FutureWarning,
- )
- env_names = {cls.env_prefix + field.name}
- elif isinstance(env, str):
- env_names = {env}
- elif isinstance(env, (set, frozenset)):
- env_names = env
- elif sequence_like(env):
- env_names = list(env)
- else:
- raise TypeError(f'invalid field env: {env!r} ({display_as_type(env)}); should be string, list or set')
-
- if not cls.case_sensitive:
- env_names = env_names.__class__(n.lower() for n in env_names)
- field.field_info.extra['env_names'] = env_names
-
- @classmethod
- def customise_sources(
- cls,
- init_settings: SettingsSourceCallable,
- env_settings: SettingsSourceCallable,
- file_secret_settings: SettingsSourceCallable,
- ) -> Tuple[SettingsSourceCallable, ...]:
- return init_settings, env_settings, file_secret_settings
-
- @classmethod
- def parse_env_var(cls, field_name: str, raw_val: str) -> Any:
- return cls.json_loads(raw_val)
-
- # populated by the metaclass using the Config class defined above, annotated here to help IDEs only
- __config__: ClassVar[Type[Config]]
-
-
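For reference, the BaseSettings class shown above is pydantic v1's environment-driven settings base. A minimal usage sketch, assuming a pydantic 2.x install that still vendors the v1 API (with pydantic 1.x the import would be `from pydantic import BaseSettings`); the field names and prefix are illustrative:

```python
import os
from pydantic.v1 import BaseSettings  # pydantic 1.x: `from pydantic import BaseSettings`

class AppSettings(BaseSettings):
    debug: bool = False
    database_url: str = "sqlite:///local.db"

    class Config:
        env_prefix = "MYAPP_"   # reads MYAPP_DEBUG / MYAPP_DATABASE_URL
        case_sensitive = False

os.environ["MYAPP_DEBUG"] = "true"
settings = AppSettings()
print(settings.debug)         # True, parsed from the environment
print(settings.database_url)  # "sqlite:///local.db", the declared default
```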
-class InitSettingsSource:
- __slots__ = ('init_kwargs',)
-
- def __init__(self, init_kwargs: Dict[str, Any]):
- self.init_kwargs = init_kwargs
-
- def __call__(self, settings: BaseSettings) -> Dict[str, Any]:
- return self.init_kwargs
-
- def __repr__(self) -> str:
- return f'InitSettingsSource(init_kwargs={self.init_kwargs!r})'
-
-
-class EnvSettingsSource:
- __slots__ = ('env_file', 'env_file_encoding', 'env_nested_delimiter', 'env_prefix_len')
-
- def __init__(
- self,
- env_file: Optional[DotenvType],
- env_file_encoding: Optional[str],
- env_nested_delimiter: Optional[str] = None,
- env_prefix_len: int = 0,
- ):
- self.env_file: Optional[DotenvType] = env_file
- self.env_file_encoding: Optional[str] = env_file_encoding
- self.env_nested_delimiter: Optional[str] = env_nested_delimiter
- self.env_prefix_len: int = env_prefix_len
-
- def __call__(self, settings: BaseSettings) -> Dict[str, Any]: # noqa C901
- """
- Build environment variables suitable for passing to the Model.
- """
- d: Dict[str, Any] = {}
-
- if settings.__config__.case_sensitive:
- env_vars: Mapping[str, Optional[str]] = os.environ
- else:
- env_vars = {k.lower(): v for k, v in os.environ.items()}
-
- dotenv_vars = self._read_env_files(settings.__config__.case_sensitive)
- if dotenv_vars:
- env_vars = {**dotenv_vars, **env_vars}
-
- for field in settings.__fields__.values():
- env_val: Optional[str] = None
- for env_name in field.field_info.extra['env_names']:
- env_val = env_vars.get(env_name)
- if env_val is not None:
- break
-
- is_complex, allow_parse_failure = self.field_is_complex(field)
- if is_complex:
- if env_val is None:
- # field is complex but no value found so far, try explode_env_vars
- env_val_built = self.explode_env_vars(field, env_vars)
- if env_val_built:
- d[field.alias] = env_val_built
- else:
- # field is complex and there's a value, decode that as JSON, then add explode_env_vars
- try:
- env_val = settings.__config__.parse_env_var(field.name, env_val)
- except ValueError as e:
- if not allow_parse_failure:
- raise SettingsError(f'error parsing env var "{env_name}"') from e
-
- if isinstance(env_val, dict):
- d[field.alias] = deep_update(env_val, self.explode_env_vars(field, env_vars))
- else:
- d[field.alias] = env_val
- elif env_val is not None:
- # simplest case, field is not complex, we only need to add the value if it was found
- d[field.alias] = env_val
-
- return d
-
- def _read_env_files(self, case_sensitive: bool) -> Dict[str, Optional[str]]:
- env_files = self.env_file
- if env_files is None:
- return {}
-
- if isinstance(env_files, (str, os.PathLike)):
- env_files = [env_files]
-
- dotenv_vars = {}
- for env_file in env_files:
- env_path = Path(env_file).expanduser()
- if env_path.is_file():
- dotenv_vars.update(
- read_env_file(env_path, encoding=self.env_file_encoding, case_sensitive=case_sensitive)
- )
-
- return dotenv_vars
-
- def field_is_complex(self, field: ModelField) -> Tuple[bool, bool]:
- """
- Find out if a field is complex, and if so whether JSON errors should be ignored
- """
- if lenient_issubclass(field.annotation, JsonWrapper):
- return False, False
-
- if field.is_complex():
- allow_parse_failure = False
- elif is_union(get_origin(field.type_)) and field.sub_fields and any(f.is_complex() for f in field.sub_fields):
- allow_parse_failure = True
- else:
- return False, False
-
- return True, allow_parse_failure
-
- def explode_env_vars(self, field: ModelField, env_vars: Mapping[str, Optional[str]]) -> Dict[str, Any]:
- """
- Process env_vars and extract the values of keys containing env_nested_delimiter into nested dictionaries.
-
- This is applied to a single field, hence filtering by env_var prefix.
- """
- prefixes = [f'{env_name}{self.env_nested_delimiter}' for env_name in field.field_info.extra['env_names']]
- result: Dict[str, Any] = {}
- for env_name, env_val in env_vars.items():
- if not any(env_name.startswith(prefix) for prefix in prefixes):
- continue
- # we remove the prefix before splitting in case the prefix has characters in common with the delimiter
- env_name_without_prefix = env_name[self.env_prefix_len :]
- _, *keys, last_key = env_name_without_prefix.split(self.env_nested_delimiter)
- env_var = result
- for key in keys:
- env_var = env_var.setdefault(key, {})
- env_var[last_key] = env_val
-
- return result
-
- def __repr__(self) -> str:
- return (
- f'EnvSettingsSource(env_file={self.env_file!r}, env_file_encoding={self.env_file_encoding!r}, '
- f'env_nested_delimiter={self.env_nested_delimiter!r})'
- )
-
-
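EnvSettingsSource.explode_env_vars is what turns delimiter-separated variable names into nested dictionaries for complex fields. A small sketch of that behaviour through the public API, again against the vendored v1 namespace; the DB__HOST/DB__PORT names are made up for the example:

```python
import os
from pydantic.v1 import BaseModel, BaseSettings

class DBSettings(BaseModel):
    host: str = "localhost"
    port: int = 5432

class Settings(BaseSettings):
    db: DBSettings = DBSettings()

    class Config:
        env_nested_delimiter = "__"  # DB__HOST -> db.host, DB__PORT -> db.port

os.environ["DB__HOST"] = "db.internal"
os.environ["DB__PORT"] = "6432"
print(Settings().db)  # host='db.internal' port=6432
```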
-class SecretsSettingsSource:
- __slots__ = ('secrets_dir',)
-
- def __init__(self, secrets_dir: Optional[StrPath]):
- self.secrets_dir: Optional[StrPath] = secrets_dir
-
- def __call__(self, settings: BaseSettings) -> Dict[str, Any]:
- """
- Build fields from "secrets" files.
- """
- secrets: Dict[str, Optional[str]] = {}
-
- if self.secrets_dir is None:
- return secrets
-
- secrets_path = Path(self.secrets_dir).expanduser()
-
- if not secrets_path.exists():
- warnings.warn(f'directory "{secrets_path}" does not exist')
- return secrets
-
- if not secrets_path.is_dir():
- raise SettingsError(f'secrets_dir must reference a directory, not a {path_type(secrets_path)}')
-
- for field in settings.__fields__.values():
- for env_name in field.field_info.extra['env_names']:
- path = find_case_path(secrets_path, env_name, settings.__config__.case_sensitive)
- if not path:
- # path does not exist, we currently don't return a warning for this
- continue
-
- if path.is_file():
- secret_value = path.read_text().strip()
- if field.is_complex():
- try:
- secret_value = settings.__config__.parse_env_var(field.name, secret_value)
- except ValueError as e:
- raise SettingsError(f'error parsing env var "{env_name}"') from e
-
- secrets[field.alias] = secret_value
- else:
- warnings.warn(
- f'attempted to load secret file "{path}" but found a {path_type(path)} instead.',
- stacklevel=4,
- )
- return secrets
-
- def __repr__(self) -> str:
- return f'SecretsSettingsSource(secrets_dir={self.secrets_dir!r})'
-
-
-def read_env_file(
- file_path: StrPath, *, encoding: str = None, case_sensitive: bool = False
-) -> Dict[str, Optional[str]]:
- try:
- from dotenv import dotenv_values
- except ImportError as e:
- raise ImportError('python-dotenv is not installed, run `pip install pydantic[dotenv]`') from e
-
- file_vars: Dict[str, Optional[str]] = dotenv_values(file_path, encoding=encoding or 'utf8')
- if not case_sensitive:
- return {k.lower(): v for k, v in file_vars.items()}
- else:
- return file_vars
-
-
-def find_case_path(dir_path: Path, file_name: str, case_sensitive: bool) -> Optional[Path]:
- """
- Find a file within path's directory matching filename, optionally ignoring case.
- """
- for f in dir_path.iterdir():
- if f.name == file_name:
- return f
- elif not case_sensitive and f.name.lower() == file_name.lower():
- return f
- return None
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/functional.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/functional.py
deleted file mode 100644
index 6189dd2cc8d5deecadd775adddad64251238a155..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/functional.py
+++ /dev/null
@@ -1,20 +0,0 @@
-"""
- pygments.lexers.functional
- ~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Just export lexer classes previously contained in this module.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pygments.lexers.lisp import SchemeLexer, CommonLispLexer, RacketLexer, \
- NewLispLexer, ShenLexer
-from pygments.lexers.haskell import HaskellLexer, LiterateHaskellLexer, \
- KokaLexer
-from pygments.lexers.theorem import CoqLexer
-from pygments.lexers.erlang import ErlangLexer, ErlangShellLexer, \
- ElixirConsoleLexer, ElixirLexer
-from pygments.lexers.ml import SMLLexer, OcamlLexer, OpaLexer
-
-__all__ = []
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/ooc.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/ooc.py
deleted file mode 100644
index c4600eaeed5c9e32bcebd894b1260082d16f1ec6..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/ooc.py
+++ /dev/null
@@ -1,85 +0,0 @@
-"""
- pygments.lexers.ooc
- ~~~~~~~~~~~~~~~~~~~
-
- Lexers for the Ooc language.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pygments.lexer import RegexLexer, bygroups, words
-from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
- Number, Punctuation
-
-__all__ = ['OocLexer']
-
-
-class OocLexer(RegexLexer):
- """
- For Ooc source code
-
- .. versionadded:: 1.2
- """
- name = 'Ooc'
- url = 'http://ooc-lang.org/'
- aliases = ['ooc']
- filenames = ['*.ooc']
- mimetypes = ['text/x-ooc']
-
- tokens = {
- 'root': [
- (words((
- 'class', 'interface', 'implement', 'abstract', 'extends', 'from',
- 'this', 'super', 'new', 'const', 'final', 'static', 'import',
- 'use', 'extern', 'inline', 'proto', 'break', 'continue',
- 'fallthrough', 'operator', 'if', 'else', 'for', 'while', 'do',
- 'switch', 'case', 'as', 'in', 'version', 'return', 'true',
- 'false', 'null'), prefix=r'\b', suffix=r'\b'),
- Keyword),
- (r'include\b', Keyword, 'include'),
- (r'(cover)([ \t]+)(from)([ \t]+)(\w+[*@]?)',
- bygroups(Keyword, Text, Keyword, Text, Name.Class)),
- (r'(func)((?:[ \t]|\\\n)+)(~[a-z_]\w*)',
- bygroups(Keyword, Text, Name.Function)),
- (r'\bfunc\b', Keyword),
- # Note: %= and ^= not listed on http://ooc-lang.org/syntax
- (r'//.*', Comment),
- (r'(?s)/\*.*?\*/', Comment.Multiline),
- (r'(==?|\+=?|-[=>]?|\*=?|/=?|:=|!=?|%=?|\?|>{1,3}=?|<{1,3}=?|\.\.|'
- r'&&?|\|\|?|\^=?)', Operator),
- (r'(\.)([ \t]*)([a-z]\w*)', bygroups(Operator, Text,
- Name.Function)),
- (r'[A-Z][A-Z0-9_]+', Name.Constant),
- (r'[A-Z]\w*([@*]|\[[ \t]*\])?', Name.Class),
-
- (r'([a-z]\w*(?:~[a-z]\w*)?)((?:[ \t]|\\\n)*)(?=\()',
- bygroups(Name.Function, Text)),
- (r'[a-z]\w*', Name.Variable),
-
- # : introduces types
- (r'[:(){}\[\];,]', Punctuation),
-
- (r'0x[0-9a-fA-F]+', Number.Hex),
- (r'0c[0-9]+', Number.Oct),
- (r'0b[01]+', Number.Bin),
- (r'[0-9_]\.[0-9_]*(?!\.)', Number.Float),
- (r'[0-9_]+', Number.Decimal),
-
- (r'"(?:\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\"])*"',
- String.Double),
- (r"'(?:\\.|\\[0-9]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'",
- String.Char),
- (r'@', Punctuation), # pointer dereference
- (r'\.', Punctuation), # imports or chain operator
-
- (r'\\[ \t\n]', Text),
- (r'[ \t]+', Text),
- ],
- 'include': [
- (r'[\w/]+', Name),
- (r',', Punctuation),
- (r'[ \t]', Text),
- (r'[;\n]', Text, '#pop'),
- ],
- }
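OocLexer is a standard Pygments RegexLexer: the `tokens` table maps regex patterns to token types, and the `include` state pops back to `root` on `;` or a newline. A short sketch of running it through `pygments.highlight`, assuming a normal Pygments install where this module is present; the ooc snippet itself is invented:

```python
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import get_lexer_by_name

ooc_code = 'main: func {\n    "Hello, ooc!" println()\n}\n'
lexer = get_lexer_by_name("ooc")  # resolves to OocLexer via the `aliases` above
print(highlight(ooc_code, lexer, TerminalFormatter()))
```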
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/unicode.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/unicode.py
deleted file mode 100644
index b0a87b235beb12490a7d35353a61a65890f253be..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/unicode.py
+++ /dev/null
@@ -1,361 +0,0 @@
-# unicode.py
-
-import sys
-from itertools import filterfalse
-from typing import List, Tuple, Union
-
-
-class _lazyclassproperty:
- def __init__(self, fn):
- self.fn = fn
- self.__doc__ = fn.__doc__
- self.__name__ = fn.__name__
-
- def __get__(self, obj, cls):
- if cls is None:
- cls = type(obj)
- if not hasattr(cls, "_intern") or any(
- cls._intern is getattr(superclass, "_intern", [])
- for superclass in cls.__mro__[1:]
- ):
- cls._intern = {}
- attrname = self.fn.__name__
- if attrname not in cls._intern:
- cls._intern[attrname] = self.fn(cls)
- return cls._intern[attrname]
-
-
-UnicodeRangeList = List[Union[Tuple[int, int], Tuple[int]]]
-
-
-class unicode_set:
- """
- A set of Unicode characters, for language-specific strings for
- ``alphas``, ``nums``, ``alphanums``, and ``printables``.
- A unicode_set is defined by a list of ranges in the Unicode character
- set, in a class attribute ``_ranges``. Ranges can be specified using
- 2-tuples or a 1-tuple, such as::
-
- _ranges = [
- (0x0020, 0x007e),
- (0x00a0, 0x00ff),
- (0x0100,),
- ]
-
- Ranges are left- and right-inclusive. A 1-tuple of (x,) is treated as (x, x).
-
- A unicode set can also be defined using multiple inheritance of other unicode sets::
-
- class CJK(Chinese, Japanese, Korean):
- pass
- """
-
- _ranges: UnicodeRangeList = []
-
- @_lazyclassproperty
- def _chars_for_ranges(cls):
- ret = []
- for cc in cls.__mro__:
- if cc is unicode_set:
- break
- for rr in getattr(cc, "_ranges", ()):
- ret.extend(range(rr[0], rr[-1] + 1))
- return [chr(c) for c in sorted(set(ret))]
-
- @_lazyclassproperty
- def printables(cls):
- """all non-whitespace characters in this range"""
- return "".join(filterfalse(str.isspace, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def alphas(cls):
- """all alphabetic characters in this range"""
- return "".join(filter(str.isalpha, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def nums(cls):
- """all numeric digit characters in this range"""
- return "".join(filter(str.isdigit, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def alphanums(cls):
- """all alphanumeric characters in this range"""
- return cls.alphas + cls.nums
-
- @_lazyclassproperty
- def identchars(cls):
- """all characters in this range that are valid identifier characters, plus underscore '_'"""
- return "".join(
- sorted(
- set(
- "".join(filter(str.isidentifier, cls._chars_for_ranges))
- + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµº"
- + "ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ"
- + "_"
- )
- )
- )
-
- @_lazyclassproperty
- def identbodychars(cls):
- """
- all characters in this range that are valid identifier body characters,
- plus the digits 0-9, and · (Unicode MIDDLE DOT)
- """
- return "".join(
- sorted(
- set(
- cls.identchars
- + "0123456789·"
- + "".join(
- [c for c in cls._chars_for_ranges if ("_" + c).isidentifier()]
- )
- )
- )
- )
-
- @_lazyclassproperty
- def identifier(cls):
- """
- a pyparsing Word expression for an identifier using this range's definitions for
- identchars and identbodychars
- """
- from pyparsing import Word
-
- return Word(cls.identchars, cls.identbodychars)
-
-
-class pyparsing_unicode(unicode_set):
- """
- A namespace class for defining common language unicode_sets.
- """
-
- # fmt: off
-
- # define ranges in language character sets
- _ranges: UnicodeRangeList = [
- (0x0020, sys.maxunicode),
- ]
-
- class BasicMultilingualPlane(unicode_set):
- """Unicode set for the Basic Multilingual Plane"""
- _ranges: UnicodeRangeList = [
- (0x0020, 0xFFFF),
- ]
-
- class Latin1(unicode_set):
- """Unicode set for Latin-1 Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0020, 0x007E),
- (0x00A0, 0x00FF),
- ]
-
- class LatinA(unicode_set):
- """Unicode set for Latin-A Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0100, 0x017F),
- ]
-
- class LatinB(unicode_set):
- """Unicode set for Latin-B Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0180, 0x024F),
- ]
-
- class Greek(unicode_set):
- """Unicode set for Greek Unicode Character Ranges"""
- _ranges: UnicodeRangeList = [
- (0x0342, 0x0345),
- (0x0370, 0x0377),
- (0x037A, 0x037F),
- (0x0384, 0x038A),
- (0x038C,),
- (0x038E, 0x03A1),
- (0x03A3, 0x03E1),
- (0x03F0, 0x03FF),
- (0x1D26, 0x1D2A),
- (0x1D5E,),
- (0x1D60,),
- (0x1D66, 0x1D6A),
- (0x1F00, 0x1F15),
- (0x1F18, 0x1F1D),
- (0x1F20, 0x1F45),
- (0x1F48, 0x1F4D),
- (0x1F50, 0x1F57),
- (0x1F59,),
- (0x1F5B,),
- (0x1F5D,),
- (0x1F5F, 0x1F7D),
- (0x1F80, 0x1FB4),
- (0x1FB6, 0x1FC4),
- (0x1FC6, 0x1FD3),
- (0x1FD6, 0x1FDB),
- (0x1FDD, 0x1FEF),
- (0x1FF2, 0x1FF4),
- (0x1FF6, 0x1FFE),
- (0x2129,),
- (0x2719, 0x271A),
- (0xAB65,),
- (0x10140, 0x1018D),
- (0x101A0,),
- (0x1D200, 0x1D245),
- (0x1F7A1, 0x1F7A7),
- ]
-
- class Cyrillic(unicode_set):
- """Unicode set for Cyrillic Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0400, 0x052F),
- (0x1C80, 0x1C88),
- (0x1D2B,),
- (0x1D78,),
- (0x2DE0, 0x2DFF),
- (0xA640, 0xA672),
- (0xA674, 0xA69F),
- (0xFE2E, 0xFE2F),
- ]
-
- class Chinese(unicode_set):
- """Unicode set for Chinese Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x2E80, 0x2E99),
- (0x2E9B, 0x2EF3),
- (0x31C0, 0x31E3),
- (0x3400, 0x4DB5),
- (0x4E00, 0x9FEF),
- (0xA700, 0xA707),
- (0xF900, 0xFA6D),
- (0xFA70, 0xFAD9),
- (0x16FE2, 0x16FE3),
- (0x1F210, 0x1F212),
- (0x1F214, 0x1F23B),
- (0x1F240, 0x1F248),
- (0x20000, 0x2A6D6),
- (0x2A700, 0x2B734),
- (0x2B740, 0x2B81D),
- (0x2B820, 0x2CEA1),
- (0x2CEB0, 0x2EBE0),
- (0x2F800, 0x2FA1D),
- ]
-
- class Japanese(unicode_set):
- """Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges"""
-
- class Kanji(unicode_set):
- "Unicode set for Kanji Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x4E00, 0x9FBF),
- (0x3000, 0x303F),
- ]
-
- class Hiragana(unicode_set):
- """Unicode set for Hiragana Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x3041, 0x3096),
- (0x3099, 0x30A0),
- (0x30FC,),
- (0xFF70,),
- (0x1B001,),
- (0x1B150, 0x1B152),
- (0x1F200,),
- ]
-
- class Katakana(unicode_set):
- """Unicode set for Katakana Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x3099, 0x309C),
- (0x30A0, 0x30FF),
- (0x31F0, 0x31FF),
- (0x32D0, 0x32FE),
- (0xFF65, 0xFF9F),
- (0x1B000,),
- (0x1B164, 0x1B167),
- (0x1F201, 0x1F202),
- (0x1F213,),
- ]
-
- 漢字 = Kanji
- カタカナ = Katakana
- ひらがな = Hiragana
-
- _ranges = (
- Kanji._ranges
- + Hiragana._ranges
- + Katakana._ranges
- )
-
- class Hangul(unicode_set):
- """Unicode set for Hangul (Korean) Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x1100, 0x11FF),
- (0x302E, 0x302F),
- (0x3131, 0x318E),
- (0x3200, 0x321C),
- (0x3260, 0x327B),
- (0x327E,),
- (0xA960, 0xA97C),
- (0xAC00, 0xD7A3),
- (0xD7B0, 0xD7C6),
- (0xD7CB, 0xD7FB),
- (0xFFA0, 0xFFBE),
- (0xFFC2, 0xFFC7),
- (0xFFCA, 0xFFCF),
- (0xFFD2, 0xFFD7),
- (0xFFDA, 0xFFDC),
- ]
-
- Korean = Hangul
-
- class CJK(Chinese, Japanese, Hangul):
- """Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range"""
-
- class Thai(unicode_set):
- """Unicode set for Thai Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0E01, 0x0E3A),
- (0x0E3F, 0x0E5B)
- ]
-
- class Arabic(unicode_set):
- """Unicode set for Arabic Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0600, 0x061B),
- (0x061E, 0x06FF),
- (0x0700, 0x077F),
- ]
-
- class Hebrew(unicode_set):
- """Unicode set for Hebrew Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0591, 0x05C7),
- (0x05D0, 0x05EA),
- (0x05EF, 0x05F4),
- (0xFB1D, 0xFB36),
- (0xFB38, 0xFB3C),
- (0xFB3E,),
- (0xFB40, 0xFB41),
- (0xFB43, 0xFB44),
- (0xFB46, 0xFB4F),
- ]
-
- class Devanagari(unicode_set):
- """Unicode set for Devanagari Unicode Character Range"""
- _ranges: UnicodeRangeList = [
- (0x0900, 0x097F),
- (0xA8E0, 0xA8FF)
- ]
-
- BMP = BasicMultilingualPlane
-
- # add language identifiers using language Unicode
- العربية = Arabic
- 中文 = Chinese
- кириллица = Cyrillic
- Ελληνικά = Greek
- עִברִית = Hebrew
- 日本語 = Japanese
- 한국어 = Korean
- ไทย = Thai
- देवनागरी = Devanagari
-
- # fmt: on
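The unicode_set machinery above exposes per-language `alphas`/`nums`/`identchars` strings computed lazily from `_ranges`. A brief sketch against a standard pyparsing 3.x install; the custom Runic set and its range are illustrative only:

```python
import pyparsing as pp
from pyparsing import pyparsing_unicode as ppu
from pyparsing.unicode import unicode_set

# Use a predefined language set:
greek_word = pp.Word(ppu.Greek.alphas)
print(greek_word.parse_string("αβγ δεζ").as_list())  # ['αβγ']

# Or declare one from ranges, exactly as the class docstring describes
# (the Runic block 0x16A0-0x16F8 is just an illustrative choice):
class Runic(unicode_set):
    _ranges = [(0x16A0, 0x16F8)]

rune_word = pp.Word(Runic.alphas)
```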
diff --git a/spaces/pycui/RealChar/alembic/versions/eced1ae3918a_add_string_user_id.py b/spaces/pycui/RealChar/alembic/versions/eced1ae3918a_add_string_user_id.py
deleted file mode 100644
index 3baf2fcbf4c10fe295d1d4f01dc3177384ee5d4a..0000000000000000000000000000000000000000
--- a/spaces/pycui/RealChar/alembic/versions/eced1ae3918a_add_string_user_id.py
+++ /dev/null
@@ -1,35 +0,0 @@
-"""Add string user ID
-
-Revision ID: eced1ae3918a
-Revises: 3821f7adaca9
-Create Date: 2023-07-19 11:02:52.002939
-
-"""
-from alembic import op
-import sqlalchemy as sa
-
-
-# revision identifiers, used by Alembic.
-revision = 'eced1ae3918a'
-down_revision = '3821f7adaca9'
-branch_labels = None
-depends_on = None
-
-
-def upgrade() -> None:
- op.add_column('interactions', sa.Column(
- 'user_id', sa.String(50), nullable=True))
-
- # Populate the new column with the old column's data
- op.execute("""
- UPDATE interactions
- SET user_id = CAST(client_id AS TEXT)
- """)
-
- # TODO: make the user_id column non-nullable after prod migration.
- # Skip for now given production servers are distributed. Note this is not
- # relevant if you deploy locally.
-
-
-def downgrade() -> None:
- op.drop_column('interactions', 'user_id')
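A migration like this is normally applied through the Alembic CLI (`alembic upgrade head`); the equivalent programmatic invocation is sketched below, with the `alembic.ini` path being an assumption about the project layout:

```python
# Hypothetical programmatic equivalent of the CLI for this revision.
from alembic import command
from alembic.config import Config

cfg = Config("alembic.ini")              # path assumed from the usual project layout
command.upgrade(cfg, "eced1ae3918a")     # runs upgrade(): adds and backfills user_id
# command.downgrade(cfg, "3821f7adaca9") # would run downgrade(): drops user_id
```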
diff --git a/spaces/qingxu98/gpt-academic/request_llm/edge_gpt.py b/spaces/qingxu98/gpt-academic/request_llm/edge_gpt.py
deleted file mode 100644
index bbf84000d84a42de80d3c051a24f06336af76aaf..0000000000000000000000000000000000000000
--- a/spaces/qingxu98/gpt-academic/request_llm/edge_gpt.py
+++ /dev/null
@@ -1,409 +0,0 @@
-"""
-========================================================================
-Part 1: from EdgeGPT.py
-https://github.com/acheong08/EdgeGPT
-========================================================================
-"""
-
-import argparse
-import asyncio
-import json
-import os
-import random
-import re
-import ssl
-import sys
-import uuid
-from enum import Enum
-from typing import Generator
-from typing import Literal
-from typing import Optional
-from typing import Union
-import websockets.client as websockets
-
-DELIMITER = "\x1e"
-
-
-# Generate a random IP in the 13.104.0.0/14 range
-FORWARDED_IP = (
- f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
-)
-
-HEADERS = {
- "accept": "application/json",
- "accept-language": "en-US,en;q=0.9",
- "content-type": "application/json",
- "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
- "sec-ch-ua-arch": '"x86"',
- "sec-ch-ua-bitness": '"64"',
- "sec-ch-ua-full-version": '"109.0.1518.78"',
- "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-model": "",
- "sec-ch-ua-platform": '"Windows"',
- "sec-ch-ua-platform-version": '"15.0.0"',
- "sec-fetch-dest": "empty",
- "sec-fetch-mode": "cors",
- "sec-fetch-site": "same-origin",
- "x-ms-client-request-id": str(uuid.uuid4()),
- "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32",
- "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx",
- "Referrer-Policy": "origin-when-cross-origin",
- "x-forwarded-for": FORWARDED_IP,
-}
-
-HEADERS_INIT_CONVER = {
- "authority": "edgeservices.bing.com",
- "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
- "accept-language": "en-US,en;q=0.9",
- "cache-control": "max-age=0",
- "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
- "sec-ch-ua-arch": '"x86"',
- "sec-ch-ua-bitness": '"64"',
- "sec-ch-ua-full-version": '"110.0.1587.69"',
- "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-model": '""',
- "sec-ch-ua-platform": '"Windows"',
- "sec-ch-ua-platform-version": '"15.0.0"',
- "sec-fetch-dest": "document",
- "sec-fetch-mode": "navigate",
- "sec-fetch-site": "none",
- "sec-fetch-user": "?1",
- "upgrade-insecure-requests": "1",
- "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69",
- "x-edge-shopping-flag": "1",
- "x-forwarded-for": FORWARDED_IP,
-}
-
-def get_ssl_context():
- import certifi
- ssl_context = ssl.create_default_context()
- ssl_context.load_verify_locations(certifi.where())
- return ssl_context
-
-
-
-class NotAllowedToAccess(Exception):
- pass
-
-
-class ConversationStyle(Enum):
- creative = "h3imaginative,clgalileo,gencontentv3"
- balanced = "galileo"
- precise = "h3precise,clgalileo"
-
-
-CONVERSATION_STYLE_TYPE = Optional[
- Union[ConversationStyle, Literal["creative", "balanced", "precise"]]
-]
-
-
-def _append_identifier(msg: dict) -> str:
- """
- Appends special character to end of message to identify end of message
- """
- # Convert dict to json string
- return json.dumps(msg) + DELIMITER
-
-
-def _get_ran_hex(length: int = 32) -> str:
- """
- Returns random hex string
- """
- return "".join(random.choice("0123456789abcdef") for _ in range(length))
-
-
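`DELIMITER` and `_append_identifier` implement the record-separator framing used on the chat websocket: each JSON payload is terminated with `\x1e`, and incoming reads are split on the same character (see `ask_stream` below). A tiny round-trip sketch:

```python
import json

DELIMITER = "\x1e"

def frame(msg: dict) -> str:
    # same framing as _append_identifier above
    return json.dumps(msg) + DELIMITER

# Two frames arriving in a single websocket read:
raw = frame({"protocol": "json", "version": 1}) + frame({"type": 6})
records = [json.loads(part) for part in raw.split(DELIMITER) if part]
print(records)  # [{'protocol': 'json', 'version': 1}, {'type': 6}]
```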
-class _ChatHubRequest:
- """
- Request object for ChatHub
- """
-
- def __init__(
- self,
- conversation_signature: str,
- client_id: str,
- conversation_id: str,
- invocation_id: int = 0,
- ) -> None:
- self.struct: dict = {}
-
- self.client_id: str = client_id
- self.conversation_id: str = conversation_id
- self.conversation_signature: str = conversation_signature
- self.invocation_id: int = invocation_id
-
- def update(
- self,
- prompt,
- conversation_style,
- options,
- ) -> None:
- """
- Updates request object
- """
- if options is None:
- options = [
- "deepleo",
- "enable_debug_commands",
- "disable_emoji_spoken_text",
- "enablemm",
- ]
- if conversation_style:
- if not isinstance(conversation_style, ConversationStyle):
- conversation_style = getattr(ConversationStyle, conversation_style)
- options = [
- "nlu_direct_response_filter",
- "deepleo",
- "disable_emoji_spoken_text",
- "responsible_ai_policy_235",
- "enablemm",
- conversation_style.value,
- "dtappid",
- "cricinfo",
- "cricinfov2",
- "dv3sugg",
- ]
- self.struct = {
- "arguments": [
- {
- "source": "cib",
- "optionsSets": options,
- "sliceIds": [
- "222dtappid",
- "225cricinfo",
- "224locals0",
- ],
- "traceId": _get_ran_hex(32),
- "isStartOfSession": self.invocation_id == 0,
- "message": {
- "author": "user",
- "inputMethod": "Keyboard",
- "text": prompt,
- "messageType": "Chat",
- },
- "conversationSignature": self.conversation_signature,
- "participant": {
- "id": self.client_id,
- },
- "conversationId": self.conversation_id,
- },
- ],
- "invocationId": str(self.invocation_id),
- "target": "chat",
- "type": 4,
- }
- self.invocation_id += 1
-
-
-class _Conversation:
- """
- Conversation API
- """
-
- def __init__(
- self,
- cookies,
- proxy,
- ) -> None:
- self.struct: dict = {
- "conversationId": None,
- "clientId": None,
- "conversationSignature": None,
- "result": {"value": "Success", "message": None},
- }
- import httpx
- self.proxy = proxy
- proxy = (
- proxy
- or os.environ.get("all_proxy")
- or os.environ.get("ALL_PROXY")
- or os.environ.get("https_proxy")
- or os.environ.get("HTTPS_PROXY")
- or None
- )
- if proxy is not None and proxy.startswith("socks5h://"):
- proxy = "socks5://" + proxy[len("socks5h://") :]
- self.session = httpx.Client(
- proxies=proxy,
- timeout=30,
- headers=HEADERS_INIT_CONVER,
- )
- for cookie in cookies:
- self.session.cookies.set(cookie["name"], cookie["value"])
-
- # Send GET request
- response = self.session.get(
- url=os.environ.get("BING_PROXY_URL")
- or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
- )
- if response.status_code != 200:
- response = self.session.get(
- "https://edge.churchless.tech/edgesvc/turing/conversation/create",
- )
- if response.status_code != 200:
- print(f"Status code: {response.status_code}")
- print(response.text)
- print(response.url)
- raise Exception("Authentication failed")
- try:
- self.struct = response.json()
- except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
- raise Exception(
- "Authentication failed. You have not been accepted into the beta.",
- ) from exc
- if self.struct["result"]["value"] == "UnauthorizedRequest":
- raise NotAllowedToAccess(self.struct["result"]["message"])
-
-
-class _ChatHub:
- """
- Chat API
- """
-
- def __init__(self, conversation) -> None:
- self.wss = None
- self.request: _ChatHubRequest
- self.loop: bool
- self.task: asyncio.Task
- print(conversation.struct)
- self.request = _ChatHubRequest(
- conversation_signature=conversation.struct["conversationSignature"],
- client_id=conversation.struct["clientId"],
- conversation_id=conversation.struct["conversationId"],
- )
-
- async def ask_stream(
- self,
- prompt: str,
- wss_link: str,
- conversation_style: CONVERSATION_STYLE_TYPE = None,
- raw: bool = False,
- options: dict = None,
- ) -> Generator[str, None, None]:
- """
- Ask a question to the bot
- """
- if self.wss and not self.wss.closed:
- await self.wss.close()
- # Check if websocket is closed
- self.wss = await websockets.connect(
- wss_link,
- extra_headers=HEADERS,
- max_size=None,
- ssl=get_ssl_context()
- )
- await self._initial_handshake()
- # Construct a ChatHub request
- self.request.update(
- prompt=prompt,
- conversation_style=conversation_style,
- options=options,
- )
- # Send request
- await self.wss.send(_append_identifier(self.request.struct))
- final = False
- while not final:
- objects = str(await self.wss.recv()).split(DELIMITER)
- for obj in objects:
- if obj is None or not obj:
- continue
- response = json.loads(obj)
- if response.get("type") != 2 and raw:
- yield False, response
- elif response.get("type") == 1 and response["arguments"][0].get(
- "messages",
- ):
- resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][
- 0
- ]["body"][0].get("text")
- yield False, resp_txt
- elif response.get("type") == 2:
- final = True
- yield True, response
-
- async def _initial_handshake(self) -> None:
- await self.wss.send(_append_identifier({"protocol": "json", "version": 1}))
- await self.wss.recv()
-
- async def close(self) -> None:
- """
- Close the connection
- """
- if self.wss and not self.wss.closed:
- await self.wss.close()
-
-
-class NewbingChatbot:
- """
- Combines everything to make it seamless
- """
-
- def __init__(
- self,
- cookies,
- proxy
- ) -> None:
- if cookies is None:
- cookies = {}
- self.cookies = cookies
- self.proxy = proxy
- self.chat_hub: _ChatHub = _ChatHub(
- _Conversation(self.cookies, self.proxy),
- )
-
- async def ask(
- self,
- prompt: str,
- wss_link: str,
- conversation_style: CONVERSATION_STYLE_TYPE = None,
- options: dict = None,
- ) -> dict:
- """
- Ask a question to the bot
- """
- async for final, response in self.chat_hub.ask_stream(
- prompt=prompt,
- conversation_style=conversation_style,
- wss_link=wss_link,
- options=options,
- ):
- if final:
- return response
- await self.chat_hub.wss.close()
- return None
-
- async def ask_stream(
- self,
- prompt: str,
- wss_link: str,
- conversation_style: CONVERSATION_STYLE_TYPE = None,
- raw: bool = False,
- options: dict = None,
- ) -> Generator[str, None, None]:
- """
- Ask a question to the bot
- """
- async for response in self.chat_hub.ask_stream(
- prompt=prompt,
- conversation_style=conversation_style,
- wss_link=wss_link,
- raw=raw,
- options=options,
- ):
- yield response
-
- async def close(self) -> None:
- """
- Close the connection
- """
- await self.chat_hub.close()
-
- async def reset(self) -> None:
- """
- Reset the conversation
- """
- await self.close()
- self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy))
-
-
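For reference, the streaming API above would be consumed roughly as sketched below; the module path, cookie file, and wss endpoint are assumptions (and the underlying Bing endpoint is no longer reliable), so treat this as a shape-of-the-API sketch only:

```python
import asyncio
import json

from request_llm.edge_gpt import NewbingChatbot  # module path assumed from the tree above

async def main():
    cookies = json.load(open("bing_cookies.json"))  # assumed cookie export
    bot = NewbingChatbot(cookies=cookies, proxy=None)
    async for final, payload in bot.ask_stream(
        prompt="Hello",
        wss_link="wss://sydney.bing.com/sydney/ChatHub",  # endpoint assumed
        conversation_style="balanced",
    ):
        if not final:
            print(payload)  # cumulative partial answer text
    await bot.close()

asyncio.run(main())
```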
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe.CS3.Web.Premium.Crack.Collection.Final [HOT] Free Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe.CS3.Web.Premium.Crack.Collection.Final [HOT] Free Download.md
deleted file mode 100644
index 8c9afb833b509577ed69a35adef558c9e922b3c7..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe.CS3.Web.Premium.Crack.Collection.Final [HOT] Free Download.md
+++ /dev/null
@@ -1,60 +0,0 @@
-Adobe.CS3.Web.Premium.Crack.Collection.Final Free Download DOWNLOAD ✦ https://geags.com/2uCrzK
-
-The download is approximately 809MB in size.
-
-This Adobe Illustrator tutorial demonstrates how to create a watercolor-like effect in Adobe Illustrator CS4. The tutorial makes use of both the Pen Tool and the Paint Bucket to create the effect.
-
-This tutorial shows the process of creating a transition effect in Adobe Photoshop CS4. It uses a Solid Color fill layer, adding a gradient overlay to the layer for ease of coloring. If you use this for your own personal use you must obtain the tutorial permission from Mr. Gamez.erose> thnx/kay
-
- sebastian if you dont have "Install with lxde" as the third option, choose that
-
- mgerdts: that's kind of trivial with the ubuntu installer
-
- I have it as the third option leftyfb
-
- When I get to the Choose a language
-
- Then that appears
-
- And I dont know how to proceed
-
- Anyways sorry for being stupid
-
- I really apreciate the effort
-
- Guess I have to try again later
-
- sebastian: please check your language and click "install"
-
- sebastian if you use Windows, the linux support group might be a good place to go
-
- mgerdts: we are here to help people with Ubuntu
-
- Ok leftyfb
-
- Im installing now
-
- Wish me luck
-
- sorry leftyfb, i thought you were asking his OS
-
- Thanks mgerdts
-
- lol
-
- sebastian, no problem
-
- mgerdts: yes, this is a support channel for Ubuntu, not his os. Try #ubuntu-offtopic for general discussion
-
- leftyfb Thanks :D
-
- We can work on the other stuff now
-
- if you dont mind
-
- I have a question tho
-
- why was 4fefd39f24
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Apocalypto Script Pdf.md b/spaces/quidiaMuxgu/Expedit-SAM/Apocalypto Script Pdf.md
deleted file mode 100644
index 40f6c17b7a9d587ed2f00486d532826aa2d3c93a..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Apocalypto Script Pdf.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Apocalypto Script Pdf DOWNLOAD ✺✺✺ https://geags.com/2uCsKq
-
-At the heart of 'A Vampire Story', and Buffini's screenplay for Byzantium, is a mother-daughter relationship. ... elements that are appropriate to the genre, and the script developed from there and became this ... Apocalypto (Prosthetic Make-Up). 4d29de3e1b
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Ethno World 5 Vst Player Torrent Download [2021].md b/spaces/quidiaMuxgu/Expedit-SAM/Ethno World 5 Vst Player Torrent Download [2021].md
deleted file mode 100644
index c376dc00c2fbf9ac16dc89a41a7989f5726ecef5..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Ethno World 5 Vst Player Torrent Download [2021].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-one of the best things about the kontakt player is that it is totally compatible with the new kontakt 5 player, which is an extremely powerful and powerful digital audio workstation. when the kontakt player was originally released, it didn't have a lot of those integrated plug-ins that are in kontakt 5. however, with the release of version 5, they added a huge number of plug-ins that are great for use with kontakt. you can use a panning function to pan sounds within the instrument. you can use the delay plug-in. you can use any of the new convolution reverb plug-ins or the automated reverb function. you can have any of the filters. you can have the the new virtual string plug-in. you can have the new plug-in to add things like sustain and attack to the sound. you can have the mod wheel. you can have the vibrato. you can have the distortion. you can have the compression. you can have almost anything that is in kontakt 5 that is not in the kontakt player. you can also control the routing of the sound.
-ethno world 5 vst player torrent download DOWNLOAD → https://geags.com/2uCst0
-the kontakt player is designed to be an extremely versatile program. it works great for a lot of different applications. the kontakt player has some features that really set it apart from other sample player applications. the main one is the fact that it is a fully integrated sample player. it doesn't need to use another program to be able to play back samples. if you use your daw as a sample player, then you can integrate the kontakt player and you can have the audio routed directly to your daw as well as to the mix. it has a lot of features that just make it a very versatile sample player. it allows you to have an additional stereo panning function that can be used to pan the sound around the room. you can have a time-stretch function that will allow you to adjust the length of the sound. you can control the pitch bend. you can control the volume of the sound.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/dataset.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/dataset.py
deleted file mode 100644
index cfd01a174978d97180a897e40cb59ecadec1d12e..0000000000000000000000000000000000000000
--- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/dataset.py
+++ /dev/null
@@ -1,183 +0,0 @@
-import os
-import random
-
-import numpy as np
-import torch
-import torch.utils.data
-from tqdm import tqdm
-
-from . import spec_utils
-
-
-class VocalRemoverValidationSet(torch.utils.data.Dataset):
- def __init__(self, patch_list):
- self.patch_list = patch_list
-
- def __len__(self):
- return len(self.patch_list)
-
- def __getitem__(self, idx):
- path = self.patch_list[idx]
- data = np.load(path)
-
- X, y = data["X"], data["y"]
-
- X_mag = np.abs(X)
- y_mag = np.abs(y)
-
- return X_mag, y_mag
-
-
-def make_pair(mix_dir, inst_dir):
- input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"]
-
- X_list = sorted(
- [
- os.path.join(mix_dir, fname)
- for fname in os.listdir(mix_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
- y_list = sorted(
- [
- os.path.join(inst_dir, fname)
- for fname in os.listdir(inst_dir)
- if os.path.splitext(fname)[1] in input_exts
- ]
- )
-
- filelist = list(zip(X_list, y_list))
-
- return filelist
-
-
-def train_val_split(dataset_dir, split_mode, val_rate, val_filelist):
- if split_mode == "random":
- filelist = make_pair(
- os.path.join(dataset_dir, "mixtures"),
- os.path.join(dataset_dir, "instruments"),
- )
-
- random.shuffle(filelist)
-
- if len(val_filelist) == 0:
- val_size = int(len(filelist) * val_rate)
- train_filelist = filelist[:-val_size]
- val_filelist = filelist[-val_size:]
- else:
- train_filelist = [
- pair for pair in filelist if list(pair) not in val_filelist
- ]
- elif split_mode == "subdirs":
- if len(val_filelist) != 0:
- raise ValueError(
- "The `val_filelist` option is not available in `subdirs` mode"
- )
-
- train_filelist = make_pair(
- os.path.join(dataset_dir, "training/mixtures"),
- os.path.join(dataset_dir, "training/instruments"),
- )
-
- val_filelist = make_pair(
- os.path.join(dataset_dir, "validation/mixtures"),
- os.path.join(dataset_dir, "validation/instruments"),
- )
-
- return train_filelist, val_filelist
-
-
-def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha):
- perm = np.random.permutation(len(X))
- for i, idx in enumerate(tqdm(perm)):
- if np.random.uniform() < reduction_rate:
- y[idx] = spec_utils.reduce_vocal_aggressively(
- X[idx], y[idx], reduction_mask
- )
-
- if np.random.uniform() < 0.5:
- # swap channel
- X[idx] = X[idx, ::-1]
- y[idx] = y[idx, ::-1]
- if np.random.uniform() < 0.02:
- # mono
- X[idx] = X[idx].mean(axis=0, keepdims=True)
- y[idx] = y[idx].mean(axis=0, keepdims=True)
- if np.random.uniform() < 0.02:
- # inst
- X[idx] = y[idx]
-
- if np.random.uniform() < mixup_rate and i < len(perm) - 1:
- lam = np.random.beta(mixup_alpha, mixup_alpha)
- X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]]
- y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]]
-
- return X, y
-
-
-def make_padding(width, cropsize, offset):
- left = offset
- roi_size = cropsize - left * 2
- if roi_size == 0:
- roi_size = cropsize
- right = roi_size - (width % roi_size) + left
-
- return left, right, roi_size
-
-
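`make_padding` sizes the padding so that a spectrogram of arbitrary width splits into whole `cropsize` windows that advance by `roi_size` (the crop minus `offset` context on each side). A quick check of the arithmetic with illustrative numbers:

```python
import math

def make_padding(width, cropsize, offset):
    # mirrors the helper above
    left = offset
    roi_size = cropsize - left * 2
    if roi_size == 0:
        roi_size = cropsize
    right = roi_size - (width % roi_size) + left
    return left, right, roi_size

width, cropsize, offset = 1000, 256, 32
left, right, roi_size = make_padding(width, cropsize, offset)  # (32, 184, 192)
padded = width + left + right                                  # 1216
n_crops = math.ceil(width / roi_size)                          # 6
assert (n_crops - 1) * roi_size + cropsize == padded           # last crop ends exactly at the pad
```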
-def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset):
- len_dataset = patches * len(filelist)
-
- X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
- y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches)
- ends = starts + cropsize
- for j in range(patches):
- idx = i * patches + j
- X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]]
- y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]]
-
- return X_dataset, y_dataset
-
-
-def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset):
- patch_list = []
- patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format(
- cropsize, sr, hop_length, n_fft, offset
- )
- os.makedirs(patch_dir, exist_ok=True)
-
- for i, (X_path, y_path) in enumerate(tqdm(filelist)):
- basename = os.path.splitext(os.path.basename(X_path))[0]
-
- X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft)
- coef = np.max([np.abs(X).max(), np.abs(y).max()])
- X, y = X / coef, y / coef
-
- l, r, roi_size = make_padding(X.shape[2], cropsize, offset)
- X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant")
- y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant")
-
- len_dataset = int(np.ceil(X.shape[2] / roi_size))
- for j in range(len_dataset):
- outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j))
- start = j * roi_size
- if not os.path.exists(outpath):
- np.savez(
- outpath,
- X=X_pad[:, :, start : start + cropsize],
- y=y_pad[:, :, start : start + cropsize],
- )
- patch_list.append(outpath)
-
- return VocalRemoverValidationSet(patch_list)
diff --git a/spaces/r3gm/RVC_HF/infer/lib/rmvpe.py b/spaces/r3gm/RVC_HF/infer/lib/rmvpe.py
deleted file mode 100644
index 2a387ebe73c7e1dd8bb7ccad1ea9e0ea89848ece..0000000000000000000000000000000000000000
--- a/spaces/r3gm/RVC_HF/infer/lib/rmvpe.py
+++ /dev/null
@@ -1,717 +0,0 @@
-import pdb, os
-
-import numpy as np
-import torch
-try:
- #Fix "Torch not compiled with CUDA enabled"
- import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
- if torch.xpu.is_available():
- from infer.modules.ipex import ipex_init
- ipex_init()
-except Exception:
- pass
-import torch.nn as nn
-import torch.nn.functional as F
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-###stft codes from https://github.com/pseeth/torch-stft/blob/master/torch_stft/util.py
-def window_sumsquare(
- window,
- n_frames,
- hop_length=200,
- win_length=800,
- n_fft=800,
- dtype=np.float32,
- norm=None,
-):
- """
- # from librosa 0.6
- Compute the sum-square envelope of a window function at a given hop length.
- This is used to estimate modulation effects induced by windowing
- observations in short-time fourier transforms.
- Parameters
- ----------
- window : string, tuple, number, callable, or list-like
- Window specification, as in `get_window`
- n_frames : int > 0
- The number of analysis frames
- hop_length : int > 0
- The number of samples to advance between frames
- win_length : [optional]
- The length of the window function. By default, this matches `n_fft`.
- n_fft : int > 0
- The length of each analysis frame.
- dtype : np.dtype
- The data type of the output
- Returns
- -------
- wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`
- The sum-squared envelope of the window function
- """
- if win_length is None:
- win_length = n_fft
-
- n = n_fft + hop_length * (n_frames - 1)
- x = np.zeros(n, dtype=dtype)
-
- # Compute the squared window at the desired length
- win_sq = get_window(window, win_length, fftbins=True)
- win_sq = normalize(win_sq, norm=norm) ** 2
- win_sq = pad_center(win_sq, n_fft)
-
- # Fill the envelope
- for i in range(n_frames):
- sample = i * hop_length
- x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))]
- return x
-
-
-class STFT(torch.nn.Module):
- def __init__(
- self, filter_length=1024, hop_length=512, win_length=None, window="hann"
- ):
- """
- This module implements an STFT using 1D convolution and 1D transpose convolutions.
-        This is a bit tricky, so there are some cases that probably won't work, as working
-        out the same sizes before and after in all overlap-add setups is tough. Right now,
- this code should work with hop lengths that are half the filter length (50% overlap
- between frames).
-
- Keyword Arguments:
- filter_length {int} -- Length of filters used (default: {1024})
- hop_length {int} -- Hop length of STFT (restrict to 50% overlap between frames) (default: {512})
- win_length {[type]} -- Length of the window function applied to each frame (if not specified, it
- equals the filter length). (default: {None})
- window {str} -- Type of window to use (options are bartlett, hann, hamming, blackman, blackmanharris)
- (default: {'hann'})
- """
- super(STFT, self).__init__()
- self.filter_length = filter_length
- self.hop_length = hop_length
- self.win_length = win_length if win_length else filter_length
- self.window = window
- self.forward_transform = None
- self.pad_amount = int(self.filter_length / 2)
- scale = self.filter_length / self.hop_length
- fourier_basis = np.fft.fft(np.eye(self.filter_length))
-
- cutoff = int((self.filter_length / 2 + 1))
- fourier_basis = np.vstack(
- [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])]
- )
- forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
- inverse_basis = torch.FloatTensor(
- np.linalg.pinv(scale * fourier_basis).T[:, None, :]
- )
-
- assert filter_length >= self.win_length
- # get window and zero center pad it to filter_length
- fft_window = get_window(window, self.win_length, fftbins=True)
- fft_window = pad_center(fft_window, size=filter_length)
- fft_window = torch.from_numpy(fft_window).float()
-
- # window the bases
- forward_basis *= fft_window
- inverse_basis *= fft_window
-
- self.register_buffer("forward_basis", forward_basis.float())
- self.register_buffer("inverse_basis", inverse_basis.float())
-
- def transform(self, input_data):
- """Take input data (audio) to STFT domain.
-
- Arguments:
- input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples)
-
- Returns:
- magnitude {tensor} -- Magnitude of STFT with shape (num_batch,
- num_frequencies, num_frames)
- phase {tensor} -- Phase of STFT with shape (num_batch,
- num_frequencies, num_frames)
- """
- num_batches = input_data.shape[0]
- num_samples = input_data.shape[-1]
-
- self.num_samples = num_samples
-
- # similar to librosa, reflect-pad the input
- input_data = input_data.view(num_batches, 1, num_samples)
- # print(1234,input_data.shape)
- input_data = F.pad(
- input_data.unsqueeze(1),
- (self.pad_amount, self.pad_amount, 0, 0, 0, 0),
- mode="reflect",
- ).squeeze(1)
- # print(2333,input_data.shape,self.forward_basis.shape,self.hop_length)
- # pdb.set_trace()
- forward_transform = F.conv1d(
- input_data, self.forward_basis, stride=self.hop_length, padding=0
- )
-
- cutoff = int((self.filter_length / 2) + 1)
- real_part = forward_transform[:, :cutoff, :]
- imag_part = forward_transform[:, cutoff:, :]
-
- magnitude = torch.sqrt(real_part**2 + imag_part**2)
- # phase = torch.atan2(imag_part.data, real_part.data)
-
- return magnitude # , phase
-
- def inverse(self, magnitude, phase):
- """Call the inverse STFT (iSTFT), given magnitude and phase tensors produced
- by the ```transform``` function.
-
- Arguments:
- magnitude {tensor} -- Magnitude of STFT with shape (num_batch,
- num_frequencies, num_frames)
- phase {tensor} -- Phase of STFT with shape (num_batch,
- num_frequencies, num_frames)
-
- Returns:
- inverse_transform {tensor} -- Reconstructed audio given magnitude and phase. Of
- shape (num_batch, num_samples)
- """
- recombine_magnitude_phase = torch.cat(
- [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1
- )
-
- inverse_transform = F.conv_transpose1d(
- recombine_magnitude_phase,
- self.inverse_basis,
- stride=self.hop_length,
- padding=0,
- )
-
- if self.window is not None:
- window_sum = window_sumsquare(
- self.window,
- magnitude.size(-1),
- hop_length=self.hop_length,
- win_length=self.win_length,
- n_fft=self.filter_length,
- dtype=np.float32,
- )
- # remove modulation effects
- approx_nonzero_indices = torch.from_numpy(
- np.where(window_sum > tiny(window_sum))[0]
- )
- window_sum = torch.from_numpy(window_sum).to(inverse_transform.device)
- inverse_transform[:, :, approx_nonzero_indices] /= window_sum[
- approx_nonzero_indices
- ]
-
- # scale by hop ratio
- inverse_transform *= float(self.filter_length) / self.hop_length
-
- inverse_transform = inverse_transform[..., self.pad_amount :]
- inverse_transform = inverse_transform[..., : self.num_samples]
- inverse_transform = inverse_transform.squeeze(1)
-
- return inverse_transform
-
- def forward(self, input_data):
- """Take input data (audio) to STFT domain and then back to audio.
-
- Arguments:
- input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples)
-
- Returns:
- reconstruction {tensor} -- Reconstructed audio given magnitude and phase. Of
- shape (num_batch, num_samples)
- """
- self.magnitude, self.phase = self.transform(input_data)
- reconstruction = self.inverse(self.magnitude, self.phase)
- return reconstruction
-
-
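The STFT module above computes magnitudes with a 1D convolution against a precomputed Fourier basis. A minimal sketch of calling it; the import path mirrors this file's location in the tree and the sizes are illustrative:

```python
import torch
from infer.lib.rmvpe import STFT  # import path assumed from this file's location

stft = STFT(filter_length=1024, hop_length=256, win_length=1024, window="hann")
audio = torch.randn(2, 16000)      # (batch, samples), e.g. one second at 16 kHz
magnitude = stft.transform(audio)  # (batch, filter_length // 2 + 1, frames)
print(magnitude.shape)             # expected: torch.Size([2, 513, 63])
```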
-from time import time as ttime
-
-
-class BiGRU(nn.Module):
- def __init__(self, input_features, hidden_features, num_layers):
- super(BiGRU, self).__init__()
- self.gru = nn.GRU(
- input_features,
- hidden_features,
- num_layers=num_layers,
- batch_first=True,
- bidirectional=True,
- )
-
- def forward(self, x):
- return self.gru(x)[0]
-
-
-class ConvBlockRes(nn.Module):
- def __init__(self, in_channels, out_channels, momentum=0.01):
- super(ConvBlockRes, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- nn.Conv2d(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- if in_channels != out_channels:
- self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1))
- self.is_shortcut = True
- else:
- self.is_shortcut = False
-
- def forward(self, x):
- if self.is_shortcut:
- return self.conv(x) + self.shortcut(x)
- else:
- return self.conv(x) + x
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- in_channels,
- in_size,
- n_encoders,
- kernel_size,
- n_blocks,
- out_channels=16,
- momentum=0.01,
- ):
- super(Encoder, self).__init__()
- self.n_encoders = n_encoders
- self.bn = nn.BatchNorm2d(in_channels, momentum=momentum)
- self.layers = nn.ModuleList()
- self.latent_channels = []
- for i in range(self.n_encoders):
- self.layers.append(
- ResEncoderBlock(
- in_channels, out_channels, kernel_size, n_blocks, momentum=momentum
- )
- )
- self.latent_channels.append([out_channels, in_size])
- in_channels = out_channels
- out_channels *= 2
- in_size //= 2
- self.out_size = in_size
- self.out_channel = out_channels
-
- def forward(self, x):
- concat_tensors = []
- x = self.bn(x)
- for i in range(self.n_encoders):
- _, x = self.layers[i](x)
- concat_tensors.append(_)
- return x, concat_tensors
-
-
-class ResEncoderBlock(nn.Module):
- def __init__(
- self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01
- ):
- super(ResEncoderBlock, self).__init__()
- self.n_blocks = n_blocks
- self.conv = nn.ModuleList()
- self.conv.append(ConvBlockRes(in_channels, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv.append(ConvBlockRes(out_channels, out_channels, momentum))
- self.kernel_size = kernel_size
- if self.kernel_size is not None:
- self.pool = nn.AvgPool2d(kernel_size=kernel_size)
-
- def forward(self, x):
- for i in range(self.n_blocks):
- x = self.conv[i](x)
- if self.kernel_size is not None:
- return x, self.pool(x)
- else:
- return x
-
-
-class Intermediate(nn.Module): #
- def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01):
- super(Intermediate, self).__init__()
- self.n_inters = n_inters
- self.layers = nn.ModuleList()
- self.layers.append(
- ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum)
- )
- for i in range(self.n_inters - 1):
- self.layers.append(
- ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum)
- )
-
- def forward(self, x):
- for i in range(self.n_inters):
- x = self.layers[i](x)
- return x
-
-
-class ResDecoderBlock(nn.Module):
- def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01):
- super(ResDecoderBlock, self).__init__()
- out_padding = (0, 1) if stride == (1, 2) else (1, 1)
- self.n_blocks = n_blocks
- self.conv1 = nn.Sequential(
- nn.ConvTranspose2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=stride,
- padding=(1, 1),
- output_padding=out_padding,
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- self.conv2 = nn.ModuleList()
- self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum))
-
- def forward(self, x, concat_tensor):
- x = self.conv1(x)
- x = torch.cat((x, concat_tensor), dim=1)
- for i in range(self.n_blocks):
- x = self.conv2[i](x)
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01):
- super(Decoder, self).__init__()
- self.layers = nn.ModuleList()
- self.n_decoders = n_decoders
- for i in range(self.n_decoders):
- out_channels = in_channels // 2
- self.layers.append(
- ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum)
- )
- in_channels = out_channels
-
- def forward(self, x, concat_tensors):
- for i in range(self.n_decoders):
- x = self.layers[i](x, concat_tensors[-1 - i])
- return x
-
-
-class DeepUnet(nn.Module):
- def __init__(
- self,
- kernel_size,
- n_blocks,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(DeepUnet, self).__init__()
- self.encoder = Encoder(
- in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels
- )
- self.intermediate = Intermediate(
- self.encoder.out_channel // 2,
- self.encoder.out_channel,
- inter_layers,
- n_blocks,
- )
- self.decoder = Decoder(
- self.encoder.out_channel, en_de_layers, kernel_size, n_blocks
- )
-
- def forward(self, x):
- x, concat_tensors = self.encoder(x)
- x = self.intermediate(x)
- x = self.decoder(x, concat_tensors)
- return x
-
-
-class E2E(nn.Module):
- def __init__(
- self,
- n_blocks,
- n_gru,
- kernel_size,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(E2E, self).__init__()
- self.unet = DeepUnet(
- kernel_size,
- n_blocks,
- en_de_layers,
- inter_layers,
- in_channels,
- en_out_channels,
- )
- self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1))
- if n_gru:
- self.fc = nn.Sequential(
- BiGRU(3 * 128, 256, n_gru),
- nn.Linear(512, 360),
- nn.Dropout(0.25),
- nn.Sigmoid(),
- )
-        else:
-            self.fc = nn.Sequential(
-                # no-GRU path: 3 channels x 128 mel bins -> 360 pitch classes,
-                # matching the dimensions used in the GRU branch above
-                nn.Linear(3 * 128, 360), nn.Dropout(0.25), nn.Sigmoid()
-            )
-
- def forward(self, mel):
- # print(mel.shape)
- mel = mel.transpose(-1, -2).unsqueeze(1)
- x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2)
- x = self.fc(x)
- # print(x.shape)
- return x
-
-
-from librosa.filters import mel
-
-
-class MelSpectrogram(torch.nn.Module):
- def __init__(
- self,
- is_half,
- n_mel_channels,
- sampling_rate,
- win_length,
- hop_length,
- n_fft=None,
- mel_fmin=0,
- mel_fmax=None,
- clamp=1e-5,
- ):
- super().__init__()
- n_fft = win_length if n_fft is None else n_fft
- self.hann_window = {}
- mel_basis = mel(
- sr=sampling_rate,
- n_fft=n_fft,
- n_mels=n_mel_channels,
- fmin=mel_fmin,
- fmax=mel_fmax,
- htk=True,
- )
- mel_basis = torch.from_numpy(mel_basis).float()
- self.register_buffer("mel_basis", mel_basis)
- self.n_fft = win_length if n_fft is None else n_fft
- self.hop_length = hop_length
- self.win_length = win_length
- self.sampling_rate = sampling_rate
- self.n_mel_channels = n_mel_channels
- self.clamp = clamp
- self.is_half = is_half
-
- def forward(self, audio, keyshift=0, speed=1, center=True):
- factor = 2 ** (keyshift / 12)
- n_fft_new = int(np.round(self.n_fft * factor))
- win_length_new = int(np.round(self.win_length * factor))
- hop_length_new = int(np.round(self.hop_length * speed))
- keyshift_key = str(keyshift) + "_" + str(audio.device)
- if keyshift_key not in self.hann_window:
- self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to(
- # "cpu"if(audio.device.type=="privateuseone") else audio.device
- audio.device
- )
- # fft = torch.stft(#doesn't support pytorch_dml
- # # audio.cpu() if(audio.device.type=="privateuseone")else audio,
- # audio,
- # n_fft=n_fft_new,
- # hop_length=hop_length_new,
- # win_length=win_length_new,
- # window=self.hann_window[keyshift_key],
- # center=center,
- # return_complex=True,
- # )
- # magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2))
- # print(1111111111)
- # print(222222222222222,audio.device,self.is_half)
-        if not hasattr(self, "stft"):
- # print(n_fft_new,hop_length_new,win_length_new,audio.shape)
- self.stft = STFT(
- filter_length=n_fft_new,
- hop_length=hop_length_new,
- win_length=win_length_new,
- window="hann",
- ).to(audio.device)
- magnitude = self.stft.transform(audio) # phase
- # if (audio.device.type == "privateuseone"):
- # magnitude=magnitude.to(audio.device)
- if keyshift != 0:
- size = self.n_fft // 2 + 1
- resize = magnitude.size(1)
- if resize < size:
- magnitude = F.pad(magnitude, (0, 0, 0, size - resize))
- magnitude = magnitude[:, :size, :] * self.win_length / win_length_new
- mel_output = torch.matmul(self.mel_basis, magnitude)
-        if self.is_half:
- mel_output = mel_output.half()
- log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp))
- # print(log_mel_spec.device.type)
- return log_mel_spec
-
-
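-# RMVPE inference wrapper: audio (16 kHz) -> log-mel spectrogram -> E2E salience
-# map over 360 pitch bins -> local weighted average in cents -> f0 in Hz.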
-class RMVPE:
- def __init__(self, model_path, is_half, device=None):
-        self.resample_kernel = {}
- self.is_half = is_half
- if device is None:
- device = "cuda" if torch.cuda.is_available() else "cpu"
- self.device = device
- self.mel_extractor = MelSpectrogram(
- is_half, 128, 16000, 1024, 160, None, 30, 8000
- ).to(device)
- if "privateuseone" in str(device):
- import onnxruntime as ort
-
- ort_session = ort.InferenceSession(
- "%s/rmvpe.onnx" % os.environ["rmvpe_root"],
- providers=["DmlExecutionProvider"],
- )
- self.model = ort_session
- else:
- model = E2E(4, 1, (2, 2))
- ckpt = torch.load(model_path, map_location="cpu")
- model.load_state_dict(ckpt)
- model.eval()
-            if is_half:
- model = model.half()
- self.model = model
- self.model = self.model.to(device)
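-        # 360 candidate pitches spaced 20 cents apart, offset 1997.38 cents above the
-        # 10 Hz reference used in decode(); padded by 4 bins per side for the local-average window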
- cents_mapping = 20 * np.arange(360) + 1997.3794084376191
- self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368
-
- def mel2hidden(self, mel):
- with torch.no_grad():
- n_frames = mel.shape[-1]
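-            # pad the time axis to a multiple of 32 so the 5-level U-Net can halve and restore it cleanly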
- mel = F.pad(
- mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="constant"
- )
- if "privateuseone" in str(self.device):
- onnx_input_name = self.model.get_inputs()[0].name
- onnx_outputs_names = self.model.get_outputs()[0].name
- hidden = self.model.run(
- [onnx_outputs_names],
- input_feed={onnx_input_name: mel.cpu().numpy()},
- )[0]
- else:
- hidden = self.model(mel)
- return hidden[:, :n_frames]
-
- def decode(self, hidden, thred=0.03):
- cents_pred = self.to_local_average_cents(hidden, thred=thred)
- f0 = 10 * (2 ** (cents_pred / 1200))
- f0[f0 == 10] = 0
- # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred])
- return f0
-
- def infer_from_audio(self, audio, thred=0.03):
- # torch.cuda.synchronize()
- t0 = ttime()
- mel = self.mel_extractor(
- torch.from_numpy(audio).float().to(self.device).unsqueeze(0), center=True
- )
- # print(123123123,mel.device.type)
- # torch.cuda.synchronize()
- t1 = ttime()
- hidden = self.mel2hidden(mel)
- # torch.cuda.synchronize()
- t2 = ttime()
- # print(234234,hidden.device.type)
- if "privateuseone" not in str(self.device):
- hidden = hidden.squeeze(0).cpu().numpy()
- else:
- hidden = hidden[0]
-        if self.is_half:
- hidden = hidden.astype("float32")
-
- f0 = self.decode(hidden, thred=thred)
- # torch.cuda.synchronize()
- t3 = ttime()
- # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0))
- return f0
-
- def infer_from_audio_with_pitch(self, audio, thred=0.03, f0_min=50, f0_max=1100):
- audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0)
- mel = self.mel_extractor(audio, center=True)
- hidden = self.mel2hidden(mel)
- hidden = hidden.squeeze(0).cpu().numpy()
-        if self.is_half:
- hidden = hidden.astype("float32")
- f0 = self.decode(hidden, thred=thred)
- f0[(f0 < f0_min) | (f0 > f0_max)] = 0
- return f0
-
-    def to_local_average_cents(self, salience, thred=0.05):
-        # t0 = ttime()
-        center = np.argmax(salience, axis=1)  # (n_frames,) index of the peak bin per frame
-        salience = np.pad(salience, ((0, 0), (4, 4)))  # (n_frames, 368)
-        # t1 = ttime()
-        center += 4
-        todo_salience = []
-        todo_cents_mapping = []
-        starts = center - 4
-        ends = center + 5
-        for idx in range(salience.shape[0]):
-            todo_salience.append(salience[:, starts[idx] : ends[idx]][idx])
-            todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]])
-        # t2 = ttime()
-        todo_salience = np.array(todo_salience)  # (n_frames, 9)
-        todo_cents_mapping = np.array(todo_cents_mapping)  # (n_frames, 9)
-        product_sum = np.sum(todo_salience * todo_cents_mapping, 1)
-        weight_sum = np.sum(todo_salience, 1)  # (n_frames,)
-        divided = product_sum / weight_sum  # weighted-average cents per frame
-        # t3 = ttime()
-        maxx = np.max(salience, axis=1)  # (n_frames,)
-        divided[maxx <= thred] = 0
-        # t4 = ttime()
-        # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3))
-        return divided
-
-
-if __name__ == "__main__":
- import librosa
- import soundfile as sf
-
- audio, sampling_rate = sf.read(r"C:\Users\liujing04\Desktop\Z\冬之花clip1.wav")
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- audio_bak = audio.copy()
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- model_path = r"D:\BaiduNetdiskDownload\RVC-beta-v2-0727AMD_realtime\rmvpe.pt"
- thred = 0.03 # 0.01
- device = "cuda" if torch.cuda.is_available() else "cpu"
- rmvpe = RMVPE(model_path, is_half=False, device=device)
- t0 = ttime()
- f0 = rmvpe.infer_from_audio(audio, thred=thred)
- # f0 = rmvpe.infer_from_audio(audio, thred=thred)
- # f0 = rmvpe.infer_from_audio(audio, thred=thred)
- # f0 = rmvpe.infer_from_audio(audio, thred=thred)
- # f0 = rmvpe.infer_from_audio(audio, thred=thred)
- t1 = ttime()
- logger.info("%s %.2f", f0.shape, t1 - t0)
diff --git a/spaces/radames/Gradio-llama2.mojo/Dockerfile b/spaces/radames/Gradio-llama2.mojo/Dockerfile
deleted file mode 100644
index 0d7058c5bcfa9dd2678e0a9d6f4e7a3680ae743e..0000000000000000000000000000000000000000
--- a/spaces/radames/Gradio-llama2.mojo/Dockerfile
+++ /dev/null
@@ -1,73 +0,0 @@
-# https://github.com/modularml/mojo/blob/main/examples/docker/Dockerfile.mojosdk
-# ===----------------------------------------------------------------------=== #
-# Copyright (c) 2023, Modular Inc. All rights reserved.
-#
-# Licensed under the Apache License v2.0 with LLVM Exceptions:
-# https://llvm.org/LICENSE.txt
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ===----------------------------------------------------------------------=== #
-
-# Example command line:
-# Use no-cache to force docker to rebuild layers of the image by downloading the SDK from the repos
-# docker build --no-cache \
-# --build-arg AUTH_KEY=
-# --pull -t modular/mojo-v0.2-`date '+%Y%d%m-%H%M'` \
-# --file Dockerfile.mojosdk .
-
-FROM ubuntu:20.04
-
-ARG DEFAULT_TZ=America/Los_Angeles
-ENV DEFAULT_TZ=$DEFAULT_TZ
-ARG MODULAR_HOME=/home/user/.modular
-ENV MODULAR_HOME=$MODULAR_HOME
-
-RUN apt-get update \
- && DEBIAN_FRONTEND=noninteractive TZ=$DEFAULT_TZ apt-get install -y \
- tzdata \
- vim \
- sudo \
- curl \
- git \
- wget && \
- rm -rf /var/lib/apt/lists/*
-
-RUN curl -fsSL https://repo.anaconda.com/miniconda/Miniconda3-py38_23.5.2-0-Linux-x86_64.sh > /tmp/miniconda.sh \
- && chmod +x /tmp/miniconda.sh \
- && /tmp/miniconda.sh -b -p /opt/conda
-
-ENV PATH=/opt/conda/bin:$PATH
-RUN conda init
-RUN pip install \
- jupyterlab \
- ipykernel \
- matplotlib \
- ipywidgets \
- gradio
-
-RUN --mount=type=secret,id=MODULAR_AUTH,mode=0444,required=true \
- curl https://get.modular.com | sh - \
- && modular auth $(cat /run/secrets/MODULAR_AUTH) \
- && modular install mojo
-
-RUN useradd -m -u 1000 user
-RUN chown -R user $MODULAR_HOME
-
-ENV PATH="$PATH:/opt/conda/bin:$MODULAR_HOME/pkg/packages.modular.com_mojo/bin"
-
-USER user
-WORKDIR $HOME/app
-
-COPY --chown=user . $HOME/app
-RUN wget -c -nv https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin
-RUN wget -c -nv https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin
-RUN wget -c -nv https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin
-RUN wget -c -nv https://huggingface.co/kirp/TinyLlama-1.1B-Chat-v0.2-bin/resolve/main/tok_tl-chat.bin
-RUN wget -c -nv https://huggingface.co/kirp/TinyLlama-1.1B-Chat-v0.2-bin/resolve/main/tl-chat.bin
-
-# CMD ["mojo", "llama2.mojo"]
-CMD ["python3", "gradio_app.py"]
\ No newline at end of file
diff --git a/spaces/radames/Jupyter-Kernel-Gateway-Flask/main.py b/spaces/radames/Jupyter-Kernel-Gateway-Flask/main.py
deleted file mode 100644
index 33aa1be5f959500c31cf47bf1efa1d1ebfa9451d..0000000000000000000000000000000000000000
--- a/spaces/radames/Jupyter-Kernel-Gateway-Flask/main.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import subprocess
-
-subprocess.Popen(
- "python -m ipykernel install --user --name=python3 --display-name='Python 3'", shell=True).communicate()
-subprocess.Popen("python -m jupyter kernelgateway --KernelGatewayApp.base_url='/' --ip=0.0.0.0 --KernelGatewayApp.allow_origin='*' --KernelGatewayApp.allow_credentials='*' --KernelGatewayApp.allow_headers='*' --KernelGatewayApp.allow_methods='*' --KernelGatewayApp.api=kernel_gateway.notebook_http --KernelGatewayApp.seed_uri=./index.ipynb --port=10000", shell=True)
-# subprocess.Popen("FLASK_APP=server.py FLASK_DEBUG=1 FLASK_ENV=development flask run --port 7860", shell=True).communicate()
-subprocess.Popen("python server.py", shell=True).communicate()
diff --git a/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/nodes/2.97291494.js b/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/nodes/2.97291494.js
deleted file mode 100644
index 03c250c53dfc0ef90f59091199670e3bb13348a5..0000000000000000000000000000000000000000
--- a/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/nodes/2.97291494.js
+++ /dev/null
@@ -1 +0,0 @@
-import{_ as P}from"../chunks/preload-helper.a4192956.js";import{s as j,n as k,o as L}from"../chunks/scheduler.e108d1fd.js";import{S as M,i as N,g,s as b,h as x,j as S,y as C,c as v,f as y,k as p,a as w,x as f,z as I,m as O,n as R,o as T}from"../chunks/index.7e6319f2.js";function E(l){let e,s=(!l[0]||!l[1]?"Loading...":JSON.stringify(l[1],null,2))+"",u;return{c(){e=g("pre"),u=O(s),this.h()},l(t){e=x(t,"PRE",{class:!0});var a=S(e);u=R(a,s),a.forEach(y),this.h()},h(){p(e,"class","bg-gray-100 p-2 rounded")},m(t,a){w(t,e,a),f(e,u)},p(t,a){a&3&&s!==(s=(!t[0]||!t[1]?"Loading...":JSON.stringify(t[1],null,2))+"")&&T(u,s)},d(t){t&&y(e)}}}function q(l){let e,s,u="Transformers.js",t,a,d="SvelteKit Static template (client-side)",m,i,o,h,_,r=l[0]!==null&&E(l);return{c(){e=g("main"),s=g("h1"),s.textContent=u,t=b(),a=g("h2"),a.textContent=d,m=b(),i=g("input"),o=b(),r&&r.c(),this.h()},l(c){e=x(c,"MAIN",{class:!0});var n=S(e);s=x(n,"H1",{class:!0,"data-svelte-h":!0}),C(s)!=="svelte-1e9bq3j"&&(s.textContent=u),t=v(n),a=x(n,"H2",{class:!0,"data-svelte-h":!0}),C(a)!=="svelte-1r4npkn"&&(a.textContent=d),m=v(n),i=x(n,"INPUT",{type:!0,class:!0,placeholder:!0}),o=v(n),r&&r.l(n),n.forEach(y),this.h()},h(){p(s,"class","text-5xl font-bold mb-2 text-center"),p(a,"class","text-2xl mb-4 text-center"),p(i,"type","text"),p(i,"class","w-full max-w-xs p-2 border border-gray-300 rounded mb-4"),p(i,"placeholder","Enter text here"),p(e,"class","flex min-h-screen flex-col items-center justify-center p-12")},m(c,n){w(c,e,n),f(e,s),f(e,t),f(e,a),f(e,m),f(e,i),f(e,o),r&&r.m(e,null),h||(_=I(i,"input",l[4]),h=!0)},p(c,[n]){c[0]!==null?r?r.p(c,n):(r=E(c),r.c(),r.m(e,null)):r&&(r.d(1),r=null)},i:k,o:k,d(c){c&&y(e),r&&r.d(),h=!1,_()}}}function A(l,e,s){let t,a=null,d=null;L(async()=>{if(!t){const o=await P(()=>import("../chunks/worker.359c8cc9.js"),[],import.meta.url);t=new o.default;const h=_=>{switch(_.data.status){case"initiate":s(0,d=!1);break;case"ready":s(0,d=!0);break;case"complete":s(1,a=_.data.output[0]);break}};t.addEventListener("message",h)}});function m(o){t&&t.postMessage({text:o})}const i=o=>{m(o.target.value)};return l.$$.update=()=>{l.$$.dirty&1&&console.log("ready",d)},[d,a,m,!0,i]}class $ extends M{constructor(e){super(),N(this,e,A,q,j,{prerender:3})}get prerender(){return this.$$.ctx[3]}}export{$ as component};
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/ABC Chemistry Book PDF Free Download of High-Quality and Up-to-Date Chemistry Resources.md b/spaces/raedeXanto/academic-chatgpt-beta/ABC Chemistry Book PDF Free Download of High-Quality and Up-to-Date Chemistry Resources.md
deleted file mode 100644
index c09d01adb77dd4bb7bdc3783c31d0e673db15fcc..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/ABC Chemistry Book PDF Free Download of High-Quality and Up-to-Date Chemistry Resources.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-Neighbours From Hell 3 Game Free Download Full 13
-Have you ever wanted to get back at your annoying neighbour who makes your life miserable? Have you ever dreamed of playing hilarious pranks on him and making him rage in frustration? If you answered yes to these questions, then you will love Neighbours From Hell 3 , a comedy puzzle stealth game that lets you unleash your inner mischief-maker.
-In this article, we will tell you everything you need to know about Neighbours From Hell 3 , including its story, gameplay, features, graphics, sound, reviews, ratings, and more. We will also show you how to download and install the game for free on your PC. So, without further ado, let's get started!
-Neighbours From Hell 3 Game Free Download Full 13 Download Zip ---> https://tinourl.com/2uKZeI
- The Story of Neighbours From Hell 3
-Neighbours From Hell 3 is the third installment in the popular Neighbours From Hell series, which started in 2003. The series follows the adventures of Woody, a TV show star who is constantly pranking his grumpy neighbour, Mr. Rottweiler (also known as "the neighbour from hell"). In each episode, Woody has to sneak into his neighbour's house or other locations and set up various traps and gags to make him angry and humiliate him in front of the camera.
-In Neighbours From Hell 3 , Woody and his neighbour are back for more mayhem. This time, Woody has to follow his neighbour around the world and prank him in different countries and cultures. From China to India, from Mexico to France, Woody will have to use his creativity and cunning to make his neighbour's life a living hell. But he will also have to deal with his neighbour's mother and other characters who will try to stop him or help him along the way.
- The Gameplay of Neighbours From Hell 3
-The gameplay of Neighbours From Hell 3 is similar to the previous games in the series. You control Woody from a third-person perspective and you have to explore each location and find objects that you can use to prank your neighbour. You can interact with various items and combine them to create funny situations. For example, you can replace a candle with a firework, or swap the shaving cream with whipped cream.
-You have to be careful not to get caught by your neighbour or his companions, or they will beat you up and end your prank session. You also have to avoid being seen by other people who might alert your neighbour or ruin your plans. You have a limited amount of time to complete each prank before your neighbour leaves or changes his routine.
-You can watch your neighbour's reactions through a small window on the screen that shows what he sees. You can also see his mood meter that indicates how angry he is. The more pranks you pull off successfully, the higher your score will be. You can also earn bonus points by finding hidden items or performing special actions.
- The Features of Neighbours From Hell 3
-Neighbours From Hell 3 has many features that make it fun and challenging. Some of them are:
-
-A total of 25 episodes set in different locations around the world, each with its own theme and atmosphere.
-A variety of pranks that range from simple to complex, from harmless to harmful, from silly to clever.
-A dynamic AI system that makes your neighbour react differently depending on his mood, his actions, and your pranks.
-A stealth mechanic that requires you to hide behind objects, sneak around corners, or distract your neighbour with noises or other items.
-A point-and-click interface that is easy to use and intuitive.
-A cartoon-style graphics that are colorful and expressive.
-A humorous sound design that features funny voice acting, sound effects, and music.
-
- The Graphics and Sound of Neighbours From Hell 3
-Neighbours From Hell 3 has a graphics style that is similar to the previous games in the series. It uses a cartoon-like aesthetic that is bright and cheerful. The characters are exaggerated and expressive, while the environments are detailed and lively. The animations are smooth and fluid, especially when Woody performs his pranks or when his neighbour reacts to them.
-The sound design of Neighbours From Hell 3 is also similar to the previous games in the series. It uses a humorous tone that matches the gameplay. The voice acting is funny and exaggerated, while the sound effects are realistic and fitting. The music is catchy and upbeat, while also reflecting the mood and theme of each location.
- The HD Remaster of Neighbours From Hell 3
-Neighbours From Hell 3 is an HD remaster of the original game that was released in 2005 for PC. It has been remastered in 2020 for modern devices, such as Windows 10, Android, iOS, and Nintendo Switch. The HD remaster of Neighbours From Hell 3 has improved the graphics and sound quality of the game, making it more enjoyable and immersive. The game has also been optimized for touch controls and widescreen displays.
-The HD remaster of Neighbours From Hell 3 includes both Neighbours From Hell 1 and Neighbours From Hell 2 , which are also remastered in full HD. You can play the classic episodes from the original games, as well as the new episodes from Neighbours From Hell 3 . You can also switch between the old school retro mode and the HD mode from the game's installation folder.
- The Comparison of Neighbours From Hell 3 with Previous Games
-Neighbours From Hell 3 is similar to the previous games in the series in terms of gameplay and style. However, it also has some differences and improvements that make it stand out. Some of them are:
-
-More locations and scenarios : Neighbours From Hell 3 takes you to different countries and cultures, where you can prank your neighbour in various ways. You can also prank him on a cruise ship, a train, a plane, and more.
-More characters and interactions : Neighbours From Hell 3 introduces new characters that can help or hinder your pranks. You can also interact with them in different ways, such as bribing them, distracting them, or pranking them as well.
-More items and combinations : Neighbours From Hell 3 gives you more items to use and combine for your pranks. You can also find hidden items that can unlock special pranks or bonus points.
-More challenges and rewards : Neighbours From Hell 3 has more levels of difficulty and complexity for your pranks. You can also earn more awards and achievements for your performance.
-
- The Reviews and Ratings of Neighbours From Hell 3
-Neighbours From Hell 3 has received mostly positive reviews and ratings from critics and players alike. The game has been praised for its humor, creativity, nostalgia, and entertainment value. The game has also been criticized for its repetitiveness, simplicity, bugs, and glitches.
- The Positive Reviews of Neighbours From Hell 3
-Most people who have played Neighbours From Hell 3 have enjoyed the game and found it fun and amusing. They have liked the game's graphics, sound, gameplay, features, and content. They have also appreciated the game's HD remaster and its inclusion of the previous games in the series.
- The Quotes from Positive Reviews of Neighbours From Hell 3
-Here are some quotes from positive reviews of Neighbours From Hell 3 :
-
-"This game is hilarious! I love pranking my neighbour and watching him go crazy. The graphics are nice and colorful, and the sound is funny and fitting. The game is easy to play and hard to master. I recommend it to anyone who likes comedy and puzzle games."
-- A user review on Steam
-
-
-"I grew up playing Neighbours From Hell 1 and 2 on my PC. I was so happy when I saw that they made a third one with new episodes and locations. The game is still as fun and addictive as ever. The HD remaster is also great, it makes the game look better and smoother. I love this game!"
-- A user review on Epic Games Store
-
-
-"This game is a gem! It's so creative and hilarious. I love how you can prank your neighbour in different ways and see his reactions. The game is also challenging and rewarding. It has a lot of content and variety. It's a great game for anyone who likes humor and stealth."
-- A user review on Google Play
-
- The Negative Reviews of Neighbours From Hell 3
-Some people who have played Neighbours From Hell 3 have disliked the game and found it boring and repetitive. They have disliked the game's graphics, sound, gameplay, features, and content. They have also complained about the game's bugs, glitches, crashes, and compatibility issues.
- The Quotes from Negative Reviews of Neighbours From Hell 3
-Here are some quotes from negative reviews of Neighbours From Hell 3 :
-
-"This game is boring and childish. The graphics are outdated and ugly, and the sound is annoying and repetitive. The gameplay is simple and easy, and the features are limited and dull. The content is also short and unoriginal. This game is a waste of money and time."
-- A user review on Metacritic
-
-
-"This game is a disaster. The graphics are pixelated and blurry, and the sound is low and distorted. The gameplay is buggy and glitchy, and the features are broken and missing. The content is also incomplete and inconsistent. The game crashes constantly and doesn't work on my device."
-- A user review on Google Play
-
-
-"This game is a disappointment. The graphics are mediocre and bland, and the sound is generic and boring. The gameplay is tedious and frustrating, and the features are lacking and unpolished. The content is also old and recycled. This game is a downgrade from the previous games in the series."
-- A user review on Steam
-
- The Conclusion of Neighbours From Hell 3
-Neighbours From Hell 3 is a comedy puzzle stealth game that lets you prank your neighbour in different locations around the world. The game has a cartoon-style graphics, a humorous sound design, a point-and-click interface, a dynamic AI system, a stealth mechanic, a variety of pranks, a total of 25 episodes, and an HD remaster that includes the previous games in the series.
-If you are looking for a fun and amusing game that will make you laugh and challenge your creativity and cunning, then you should give Neighbours From Hell 3 a try. You can download and install the game for free on your PC by following these steps:
-
-Click on this link to go to the download page: [link]
-Select your preferred language and click on "Download".
-Wait for the download to finish and then open the file.
-Follow the instructions to install the game on your PC.
-Enjoy pranking your neighbour!
-
- The FAQs about Neighbours From Hell 3
-Here are some common questions and answers about Neighbours From Hell 3 :
-
-Q: What are the system requirements for Neighbours From Hell 3?
-A: The minimum system requirements for Neighbours From Hell 3 are: Windows 10, 64-bit processor, 4GB RAM, Intel HD 4000 or equivalent graphics card, DirectX 11, 4GB disc space, DirectX compatible sound card.
-Q: How long is Neighbours From Hell 3?
-A: The average playtime for Neighbours From Hell 3 is about 10 hours.
-Q: Is Neighbours From Hell 3 multiplayer?
-A: No, Neighbours From Hell 3 is a single-player game.
-Q: Is Neighbours From Hell 3 suitable for children?
-A: No, Neighbours From Hell 3 is rated T for Teen by ESRB. It contains crude humor, violence, language, suggestive themes, alcohol reference, and tobacco reference.
-Q: Where can I find more information about Neighbours From Hell 3?
-A: You can find more information about Neighbours From Hell 3 on its official website: [link]
-
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Boris Fx Title Toolkit 1.0 Serial Number S.T.E.M. Curiosity[3].md b/spaces/raedeXanto/academic-chatgpt-beta/Boris Fx Title Toolkit 1.0 Serial Number S.T.E.M. Curiosity[3].md
deleted file mode 100644
index b0b9102782899b1a8b2cda3fad6c1425a23e4230..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Boris Fx Title Toolkit 1.0 Serial Number S.T.E.M. Curiosity[3].md
+++ /dev/null
@@ -1,112 +0,0 @@
-
-Boris FX Title Toolkit 1.0 Serial Number: A Guide for After Effects Users
- If you are looking for a powerful and versatile plugin to create stunning titles for your video projects in After Effects, you might want to check out Boris FX Title Toolkit 1.0. This plugin is a collection of tools that allow you to design, animate, and apply effects to titles in a fast and easy way. In this article, we will show you what Boris FX Title Toolkit 1.0 is, how to install and activate it, how to use it in After Effects, and some tips and tricks for using it effectively.
-Boris Fx Title Toolkit 1.0 Serial Numberl Download Zip ○○○ https://tinourl.com/2uL3lE
- What is Boris FX Title Toolkit 1.0?
- Boris FX Title Toolkit 1.0 is a title effects plugin for After Effects CC that was released in 2022 by Boris FX, a leading developer of visual effects software for film, TV, and video editing. The plugin is compatible with both Windows and Mac operating systems and requires After Effects CC 2022 or later.
- The plugin offers a range of features that make title creation easy and fun, such as:
-
-A user-friendly interface that integrates seamlessly with After Effects
-A set of four tools: Title Container, Title Crawl, Title Text, and Title Style
-A library of over 200 presets and templates for different types of titles
-A collection of over 100 filters and transitions for adding effects to titles
-An option to import and export titles as vector graphics or text files
-An option to apply motion blur, 3D extrusion, and other advanced features to titles
-
- To use Boris FX Title Toolkit 1.0, you need to have a valid serial number that you can obtain from the Boris FX website or from an authorized reseller. The serial number is a unique code that identifies your license and allows you to activate the plugin on your computer.
- How to install and activate Boris FX Title Toolkit 1.0
- To install Boris FX Title Toolkit 1.0 on your computer, you need to follow these steps:
-
-Download the installer file from the Boris FX website or from an authorized reseller
-Run the installer file and follow the instructions on the screen
-Select the option to install Boris FX Title Toolkit 1.0 for After Effects CC
-Enter your serial number when prompted
-Restart After Effects CC after the installation is complete
-
- To activate Boris FX Title Toolkit 1.0 on your computer, you need to follow these steps:
-
-Launch After Effects CC and create a new project or open an existing one
-Go to the Effects menu and select Boris FX > Title Toolkit > Activate License
-Enter your serial number again when prompted
-Click OK to confirm the activation
-
- You can also deactivate your license if you want to transfer it to another computer or if you want to uninstall the plugin. To do so, you need to follow these steps:
-
-Launch After Effects CC and open a project that uses Boris FX Title Toolkit 1.0
-Go to the Effects menu and select Boris FX > Title Toolkit > Deactivate License
-Click OK to confirm the deactivation
-
- How to use Boris FX Title Toolkit 1.0 in After Effects
- To use Boris FX Title Toolkit 1.0 in After Effects, you need to apply one of its four tools to a layer in your composition: Title Container, Title Crawl, Title Text, or Title Style.
- The main components of the plugin: Title Container, Title Crawl, Title Text, and Title Style
- Title Container is a tool that allows you to create a container layer that holds one or more title layers inside it. You can use this tool to group multiple titles together and apply effects or animations to them as a whole.
- Title Crawl is a tool that allows you to create a title layer that scrolls horizontally or vertically across the screen. You can use this tool to create credits, news tickers, or other types of scrolling titles.
- Title Text is a tool that allows you to create a title layer that displays static text on the screen. You can use this tool to create headlines, subtitles, captions, or other types of text titles.
- Title Style is a tool that allows you to create a style layer that defines the appearance of one or more title layers. You can use this tool to change the font, color, size, alignment, shadow, glow, outline, or other attributes of your titles.
- How to create and customize titles using the plugin's parameters and presets
- To create a title using one of the plugin's tools, you need to follow these steps:
-
-Create a new solid layer in your composition by going to Layer > New > Solid
-Rename the layer as "Title" or something else that makes sense for your project
-Go to the Effects menu and select Boris FX > Title Toolkit > [Tool Name]
-In the Effect Controls panel, adjust the parameters of the tool according to your preferences
-In the Composition panel, preview your title by pressing spacebar or clicking on RAM Preview button
-
- To customize your title using one of the plugin's presets or templates, you need to follow these steps:
-
-Select your title layer in your composition
-In the Effect Controls panel, click on Load Preset button next to Preset parameter
-In the Load Preset dialog box, browse through the categories and subcategories of presets and templates and select one that suits your needs
-Click OK to apply the preset or template to your title layer
-In the Effect Controls panel, adjust any other parameters of the tool according to your preferences
-In the Composition panel, preview your title by pressing spacebar or clicking on RAM Preview button
- How to animate and apply effects to titles using the plugin's keyframes and filters
- To animate your title using one of the plugin's keyframes, you need to follow these steps:
-
-Select your title layer in your composition
-In the Effect Controls panel, find the parameter that controls the property you want to animate, such as Position, Scale, Rotation, Opacity, or any other parameter under Animation group
-Move the current-time indicator (CTI) to where you want the animation to start in the Timeline panel and click on the stopwatch icon next to the parameter name to set a keyframe at that point with the current value of the parameter
-Move the CTI to where you want the animation to end in the Timeline panel and change the value of the parameter to set another keyframe at that point with the new value of the parameter
-In between these two keyframes, you can add more keyframes by changing the value of the parameter at different points in time, or adjust the interpolation of keyframes by right-clicking on them and choosing Keyframe Interpolation from the context menu
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1 Cara Membuat Aplikasi Sistem Pakar dengan PHP dan Database MySQL.md b/spaces/raedeXanto/academic-chatgpt-beta/Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1 Cara Membuat Aplikasi Sistem Pakar dengan PHP dan Database MySQL.md
deleted file mode 100644
index 5b1f0bb98a629c90ccaf996a836b740a19c6cffa..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1 Cara Membuat Aplikasi Sistem Pakar dengan PHP dan Database MySQL.md
+++ /dev/null
@@ -1,167 +0,0 @@
-
-Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1
-Apa itu sistem pakar? Sistem pakar adalah sebuah program komputer yang dapat meniru pengetahuan dan kemampuan seorang pakar dalam bidang tertentu. Sistem pakar dapat digunakan untuk memberikan solusi atau saran kepada pengguna yang membutuhkan bantuan dalam menyelesaikan masalah atau mengambil keputusan.
-Apa itu phpzip 1? Phpzip 1 adalah sebuah library PHP yang dapat digunakan untuk membuat dan membaca file zip. Phpzip 1 dapat memudahkan pengembang untuk mengelola file zip dalam aplikasi web mereka.
-Contoh Aplikasi Program Sistem Pakar Menggunakan Phpzip 1 Download ••• https://tinourl.com/2uL0u2
-Mengapa menggunakan sistem pakar dan phpzip 1 untuk membuat aplikasi sistem pakar? Salah satu keuntungan menggunakan sistem pakar dan phpzip 1 adalah kemudahan dalam pengembangan dan distribusi aplikasi sistem pakar. Dengan menggunakan sistem pakar dan phpzip 1, pengembang dapat membuat aplikasi sistem pakar yang berbasis web dengan cepat dan mudah. Selain itu, dengan menggunakan phpzip 1, pengembang dapat mengemas semua file yang dibutuhkan oleh aplikasi sistem pakar dalam satu file zip yang dapat diunduh dan dijalankan oleh pengguna tanpa perlu instalasi.
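The packaging claim above (shipping the whole expert system as one downloadable zip) can be illustrated with a short script. This is only a sketch: it uses Python's standard zipfile module as a stand-in rather than the phpzip 1 API the article refers to, and the file names are hypothetical.

```python
# Minimal packaging sketch: bundle the expert-system web app into one zip for
# download. Python's standard zipfile module stands in for the PHP-side
# "phpzip 1" library mentioned in the article; the file list is hypothetical.
import zipfile
from pathlib import Path

APP_FILES = ["index.php", "diagnosa.php", "basis_pengetahuan.php", "style.css"]


def package_app(out_path: str = "sistem_pakar.zip") -> None:
    with zipfile.ZipFile(out_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for name in APP_FILES:
            if Path(name).exists():  # skip files that are not present in this sketch
                zf.write(name)       # store each app file at the archive root
    print(f"wrote {out_path}")


if __name__ == "__main__":
    package_app()
```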
-Langkah-langkah Membuat Aplikasi Sistem Pakar
-Berikut ini adalah langkah-langkah umum yang dapat diikuti oleh pengembang untuk membuat aplikasi sistem pakar menggunakan sistem pakar dan phpzip 1:
-Menentukan Nama dan Tujuan Sistem Pakar
-Langkah pertama adalah menentukan nama dan tujuan dari aplikasi sistem pakar yang akan dibuat. Nama dan tujuan harus sesuai dengan bidang keahlian yang akan ditiru oleh sistem pakar. Contohnya, jika ingin membuat aplikasi sistem pakar untuk mendiagnosa penyakit kucing, maka nama dan tujuannya bisa seperti ini:
-
-Nama: Sistem Pakar Diagnosa Penyakit Kucing
-Tujuan: Membantu pengguna untuk mengetahui jenis penyakit kucing berdasarkan gejala-gejala yang dialami oleh kucingnya dan memberikan solusi untuk pengobatan dan perawatan kucingnya.
-
-Mengumpulkan Informasi dari Pakar
-Langkah kedua adalah mengumpulkan informasi dari pakar yang berkaitan dengan bidang keahlian yang akan ditiru oleh sistem pakar. Informasi ini bisa berupa fakta, aturan, variabel, atau lainnya yang dapat digunakan untuk membangun basis pengetahuan dan basis aturan dari sistem pakar. Informasi ini bisa didapatkan dari berbagai sumber seperti buku, jurnal, wawancara, atau internet.
-Contohnya, jika ingin membuat aplikasi sistem pakar untuk mendiagnosa penyakit kucing, maka informasi yang perlu dikumpulkan adalah jenis-jenis penyakit kucing beserta gejala-gejala, penyebab-penyebab, dan pengobatan-pengobatan yang terkait dengan masing-masing penyakit kucing.
-Membuat Basis Pengetahuan dan Basis Aturan
-Langkah ketiga adalah membuat basis pengetahuan dan basis aturan dari sistem pakar berdasarkan informasi yang telah dikumpulkan dari pakar. Basis pengetahuan adalah kumpulan fakta atau data yang berkaitan dengan bidang keahlian yang ditiru oleh sistem pakar. Basis aturan adalah kumpulan aturan atau logika yang digunakan oleh sistem pakar untuk melakukan inferensi atau penalaran berdasarkan basis pengetahuan.
-Contohnya, jika ingin membuat aplikasi sistem pakar untuk mendiagnosa penyakit kucing, maka basis pengetahuannya bisa berisi fakta-fakta seperti ini:
-
-Fakta 1: Penyakit A adalah penyakit kucing yang ditandai dengan gejala G1, G2, dan G3.
-Fakta 2: Penyakit B adalah penyakit kucing yang ditandai dengan gejala G1, G4, dan G5.
-Fakta 3: Penyakit C adalah penyakit kucing yang ditandai dengan gejala G1, G3, G5, dan G6.
-Fakta 4: Solusi untuk mengobati penyakit A adalah S1.
-Fakta 5: Solusi untuk mengobati penyakit B adalah S2.
-Fakta 6: Solusi untuk mengobati penyakit C adalah S3.
-
-Sedangkan basis aturannya bisa berisi aturan-aturan seperti ini:
-
-Aturan 1: Jika kucing memiliki gejala G1, G2, dan G3 maka kucing tersebut menderita penyakit A.
-Aturan 2: Jika kucing memiliki gejala G1, G4, dan G5 maka kucing tersebut menderita penyakit B.
-Aturan 3: Jika kucing memiliki gejala G1,G3,G5,dan G6 maka kucing tersebut menderita penyakit C.
-Aturan 4: Jika kucing menderita penyakit A maka solusi yang diberikan adalah S1.
-Aturan 5: Jika kucing menderita penyakit B maka solusi yang diberikan adalah S2.
-Aturan 6: Jika kucing menderita penyakit C maka solusi yang diberikan adalah S3.
-
-Membuat Antarmuka Pengguna dan Mesin Inferensi
-Langkah keempat adalah membuat antarmuka pengguna dan mesin inferensi dari sistem pakar. Antarmuka pengguna adalah bagian dari sistem pakar yang berfungsi untuk berinteraksi dengan pengguna, seperti menerima input, menampilkan output, dan memberikan feedback. Mesin inferensi adalah bagian dari sistem pakar yang berfungsi untuk melakukan inferensi atau penalaran berdasarkan basis pengetahuan dan basis aturan.
-Contohnya, jika ingin membuat aplikasi sistem pakar untuk mendiagnosa penyakit kucing, maka antarmuka penggunanya bisa berisi halaman-halaman seperti ini:
-
-Halaman 1: Halaman awal yang menampilkan nama dan tujuan dari aplikasi sistem pakar serta tombol untuk memulai diagnosa.
-Halaman 2: Halaman yang menampilkan pertanyaan-pertanyaan tentang gejala-gejala yang dialami oleh kucing beserta pilihan jawaban ya atau tidak.
-Halaman 3: Halaman yang menampilkan hasil diagnosa berupa jenis penyakit kucing yang diderita beserta solusi untuk pengobatan dan perawatan kucing.
-
-Sedangkan mesin inferensinya bisa menggunakan metode forward chaining atau backward chaining untuk melakukan inferensi. Metode forward chaining adalah metode inferensi yang dimulai dari fakta-fakta yang diketahui dan mencari aturan-aturan yang sesuai untuk mencapai kesimpulan. Metode backward chaining adalah metode inferensi yang dimulai dari kesimpulan yang diinginkan dan mencari fakta-fakta yang mendukungnya.
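To make the backward-chaining direction described above concrete, here is a minimal, hypothetical Python sketch: it starts from a candidate conclusion (a disease) and checks whether the observed symptoms support it. The rule table is a simplified stand-in, not the article's PHP code.

```python
# Backward chaining, minimal sketch: start from a hypothesis (a disease) and
# check whether every symptom the hypothesis requires was actually observed.
# The rule table below is illustrative only, not the article's PHP rule base.
RULES = {
    "Flu Kucing": {"bersin-bersin", "hidung berair", "mata merah", "demam", "nafsu makan menurun"},
    "Cacingan": {"perut buncit", "diare", "muntah-muntah", "bulu rontok", "anemia"},
}


def supports(hypothesis: str, observed: set) -> bool:
    """Goal-driven check: are all symptoms required by the hypothesis present?"""
    required = RULES.get(hypothesis, set())
    return bool(required) and required.issubset(observed)


observed = {"bersin-bersin", "hidung berair", "mata merah", "demam", "nafsu makan menurun"}
print(supports("Flu Kucing", observed))  # True
print(supports("Cacingan", observed))    # False
```

Forward chaining runs in the opposite direction, from the reported symptoms toward whichever rule they satisfy; a sketch of that direction follows the pseudocode later in the article.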
-Melakukan Pengujian dan Evaluasi
-Langkah kelima adalah melakukan pengujian dan evaluasi dari aplikasi sistem pakar. Pengujian dan evaluasi bertujuan untuk menguji kebenaran, keandalan, dan kemanfaatan dari aplikasi sistem pakar. Pengujian dan evaluasi bisa dilakukan dengan cara-cara seperti ini:
-
-Melakukan uji coba dengan menggunakan data-data kasus nyata atau simulasi.
-Melakukan validasi dengan membandingkan hasil diagnosa dari aplikasi sistem pakar dengan hasil diagnosa dari pakar asli.
-Melakukan verifikasi dengan memeriksa kesesuaian antara basis pengetahuan dan basis aturan dengan informasi dari pakar.
-Melakukan uji kegunaan dengan mengukur tingkat kepuasan, kemudahan, dan kecepatan pengguna dalam menggunakan aplikasi sistem pakar.
-
-Contoh Kasus Aplikasi Sistem Pakar untuk Diagnosa Penyakit Kucing
-Berikut ini adalah contoh kasus aplikasi sistem pakar untuk mendiagnosa penyakit kucing menggunakan phpzip 1:
-Nama dan Tujuan Aplikasi Sistem Pakar untuk Diagnosa Penyakit Kucing
-Nama aplikasi sistem pakar untuk mendiagnosa penyakit kucing adalah "Sistem Pakar Diagnosa Penyakit Kucing". Tujuan aplikasi ini adalah membantu pengguna untuk mengetahui jenis penyakit kucing berdasarkan gejala-gejala yang dialami oleh kucingnya dan memberikan solusi untuk pengobatan dan perawatan kucingnya.
-Informasi dari Pakar tentang Penyakit Kucing
-Informasi dari pakar tentang penyakit kucing didapatkan dari sumber-sumber seperti buku, jurnal, atau internet. Berikut ini adalah beberapa contoh informasi dari pakar tentang penyakit kucing:
-
-| Penyakit Kucing | Gejala-Gejala | Penyebab-Penyebab | Pengobatan-Pengobatan |
-| --- | --- | --- | --- |
-| Flu Kucing | Bersin-bersin, hidung berair, mata merah, demam, nafsu makan menurun. | Virus atau bakteri yang menular melalui kontak langsung atau udara. | Memberikan antibiotik, obat penurun demam, cairan elektrolit, pembersih mata dan hidung. |
-| Cacingan | Perut buncit, diare, muntah-muntah, bulu rontok, anemia. | Cacing parasit yang masuk melalui mulut atau kulit saat kucing memakan makanan atau minuman yang tercemar cacing. | Memberikan obat cacing sesuai jenis cacingnya, membersihkan lingkungan kandang atau tempat tinggal kucing. |
-| Rabies | Gangguan saraf, agresif, gigit-gigit benda atau orang, air liur berbusa, lumpuh. | Virus rabies yang menular melalui gigitan hewan lain yang terinfeksi rabies. | Tidak ada pengobatan spesifik. Pencegahan dengan memberikan vaksin rabies kepada kucing secara rutin. |
-
- Basis Pengetahuan dan Basis Aturan untuk Diagnosa Penyakit Kucing
-Basis pengetahuan dan basis aturan untuk diagnosa penyakit kucing dibuat berdasarkan informasi dari pakar tentang penyakit kucing. Berikut ini adalah beberapa contoh basis pengetahuan dan basis aturan untuk diagnosa penyakit kucing:
-
-Fakta 1: Flu Kucing adalah penyakit kucing yang ditandai dengan gejala bersin-bersin, hidung berair, mata merah, demam, nafsu makan menurun.
-Fakta 2: Cacingan adalah penyakit kucing yang ditandai dengan gejala perut buncit, diare, muntah-muntah, bulu rontok, anemia.
-Fakta 3: Rabies adalah penyakit kucing yang ditandai dengan gejala gangguan saraf, agresif, gigit-gigit benda atau orang, air liur berbusa, lumpuh.
- Fakta 4: Solusi untuk mengobati Flu Kucing adalah memberikan antibiotik, obat penurun demam, cairan elektrolit, pembersih mata dan hidung.
- Fakta 5: Solusi untuk mengobati Cacingan adalah memberikan obat cacing sesuai jenis cacingnya, membersihkan lingkungan kandang atau tempat tinggal kucing.
- Fakta 6: Solusi untuk mengobati Rabies adalah tidak ada pengobatan spesifik. Pencegahan dengan memberikan vaksin rabies kepada kucing secara rutin.
- Aturan 1: Jika kucing memiliki gejala bersin-bersin, hidung berair, mata merah, demam, nafsu makan menurun maka kucing tersebut menderita Flu Kucing.
- Aturan 2: Jika kucing memiliki gejala perut buncit, diare, muntah-muntah, bulu rontok, anemia maka kucing tersebut menderita Cacingan.
- Aturan 3: Jika kucing memiliki gejala gangguan saraf, agresif, gigit-gigit benda atau orang, air liur berbusa, lumpuh maka kucing tersebut menderita Rabies.
- Aturan 4: Jika kucing menderita Flu Kucing maka solusi yang diberikan adalah memberikan antibiotik, obat penurun demam, cairan elektrolit, pembersih mata dan hidung.
- Aturan 5: Jika kucing menderita Cacingan maka solusi yang diberikan adalah memberikan obat cacing sesuai jenis cacingnya, membersihkan lingkungan kandang atau tempat tinggal kucing.
- Aturan 6: Jika kucing menderita Rabies maka solusi yang diberikan adalah tidak ada pengobatan spesifik. Pencegahan dengan memberikan vaksin rabies kepada kucing secara rutin.
-
-Antarmuka Pengguna dan Mesin Inferensi untuk Diagnosa Penyakit Kucing
-Antarmuka pengguna dan mesin inferensi untuk diagnosa penyakit kucing dibuat menggunakan bahasa pemrograman PHP dan library phpzip 1. Berikut ini adalah beberapa contoh antarmuka pengguna dan mesin inferensi untuk diagnosa penyakit kucing:
- Halaman 1: Halaman awal yang menampilkan nama dan tujuan dari aplikasi sistem pakar serta tombol untuk memulai diagnosa.
-
-Halaman 2: Halaman yang menampilkan pertanyaan-pertanyaan tentang gejala-gejala yang dialami oleh kucing beserta pilihan jawaban ya atau tidak.
-
-Halaman 3: Halaman yang menampilkan hasil diagnosa berupa jenis penyakit kucing yang diderita beserta solusi untuk pengobatan dan perawatan kucing.
-
-Mesin inferensi menggunakan metode forward chaining untuk melakukan inferensi. Berikut ini adalah pseudocode dari mesin inferensi:
-
-//Inisialisasi variabel
-facts = array of facts from knowledge base
-rules = array of rules from rule base
-input = array of user input
-output = empty array
-//Loop until output is not empty or input is empty
-while output is empty and input is not empty
-    //Loop through each rule
-    for each rule in rules
-        //Check if the rule's condition matches the input
-        if rule.condition matches input
-            //Add the rule's conclusion to the output
-            output.add(rule.conclusion)
-            //Remove the rule from the rules
-            rules.remove(rule)
-            //Break the loop
-            break
-        end if
-    end for
-    //Remove the first element from the input
-    input.remove(input[0])
-end while
-//Display the output or a message if output is empty
-if output is not empty
-    display output
-else
-    display "Tidak dapat mendiagnosa penyakit kucing"
-end if
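The pseudocode above can be rendered as a compact Python sketch. This is an illustration only (the application itself is described as PHP): a rule fires when all of its symptoms appear in the user's input, and the generic rules and solutions from the earlier example (Aturan 1-6, S1-S3) stand in for the real rule base.

```python
# Forward chaining, a compact rendering of the pseudocode above: data-driven,
# start from the symptoms the user reported and fire the first rule whose
# condition they satisfy. Rules and solutions mirror the generic Aturan 1-6 example.
RULES = [
    ({"G1", "G2", "G3"}, "Penyakit A"),
    ({"G1", "G4", "G5"}, "Penyakit B"),
    ({"G1", "G3", "G5", "G6"}, "Penyakit C"),
]
SOLUTIONS = {"Penyakit A": "S1", "Penyakit B": "S2", "Penyakit C": "S3"}


def diagnose(symptoms: set) -> str:
    for required, disease in RULES:
        if required.issubset(symptoms):  # the rule's condition matches the input facts
            return f"{disease} (solusi: {SOLUTIONS[disease]})"
    return "Tidak dapat mendiagnosa penyakit kucing"


print(diagnose({"G1", "G3", "G5", "G6"}))  # Penyakit C (solusi: S3)
print(diagnose({"G2"}))                    # Tidak dapat mendiagnosa penyakit kucing
```

Representing each rule's condition as a set keeps the matching step a single subset test, which is all the pseudocode's "condition matches input" check needs to be.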
-Pengujian dan Evaluasi Aplikasi Sistem Pakar untuk Diagnosa Penyakit Kucing
-Pengujian dan evaluasi aplikasi sistem pakar untuk diagnosa penyakit kucing dilakukan dengan menggunakan data-data kasus nyata atau simulasi, validasi dengan pakar asli, verifikasi dengan informasi dari pakar, dan uji kegunaan dengan pengguna. Berikut ini adalah beberapa contoh pengujian dan evaluasi aplikasi sistem pakar untuk diagnosa penyakit kucing:
-
-Uji coba dengan data kasus nyata: Misalnya, ada kucing yang memiliki gejala bersin-bersin, hidung berair, mata merah, demam, dan nafsu makan menurun. Dengan memasukkan gejala-gejala tersebut ke dalam aplikasi sistem pakar, hasil diagnosanya adalah kucing tersebut menderita Flu Kucing dan solusinya adalah memberikan antibiotik, obat penurun demam, cairan elektrolit, pembersih mata dan hidung.
-Validasi dengan pakar asli: Misalnya, ada dokter hewan yang ahli dalam bidang penyakit kucing. Dengan memberikan data kasus nyata atau simulasi kepada dokter hewan tersebut dan meminta dia untuk mendiagnosa penyakit kucing dan memberikan solusinya, hasil diagnosanya harus sama atau setidaknya mirip dengan hasil diagnosa dari aplikasi sistem pakar.
-Verifikasi dengan informasi dari pakar: Misalnya, ada buku atau jurnal yang berisi informasi tentang penyakit kucing. Dengan membandingkan basis pengetahuan dan basis aturan dari aplikasi sistem pakar dengan informasi dari buku atau jurnal tersebut, basis pengetahuan dan basis aturan harus sesuai atau setidaknya tidak bertentangan dengan informasi dari buku atau jurnal tersebut.
-Uji kegunaan dengan pengguna: Misalnya, ada pengguna yang ingin mencoba aplikasi sistem pakar untuk mendiagnosa penyakit kucing. Dengan mengukur tingkat kepuasan, kemudahan, dan kecepatan pengguna dalam menggunakan aplikasi sistem pakar, aplikasi sistem pakar harus dapat memberikan kepuasan, kemudahan, dan kecepatan yang tinggi kepada pengguna.
-
-Conclusion and Suggestions
-From the explanation above, it can be concluded that an expert system application is a computer program that can imitate the knowledge and abilities of an expert in a particular field. An expert system application can be built using an expert system and phpzip 1 as supporting tools. The steps for building an expert system application are: define the name and purpose of the expert system, gather information from experts, build the knowledge base and rule base, build the user interface and inference engine, and carry out testing and evaluation. An example use case is an expert system application for diagnosing cat diseases.
-Here are some suggestions for improving or further developing the expert system application:
-
-Increase the amount and variety of real or simulated case data so that more complete and accurate trials can be run.
-Strengthen the collaboration with real experts so that deeper and more comprehensive validation can be performed.
-Expand the references drawn from high-quality and trustworthy information sources so that more valid and reliable verification can be performed.
-Make the user-interface features more interactive and attractive so that user satisfaction, ease of use, and speed improve.
-
- FAQs
-
-What is an expert system? An expert system is a computer program that can imitate the knowledge and abilities of an expert in a particular field.
-What is phpzip 1? Phpzip 1 is a PHP library that can be used to create and read zip files.
-What are the advantages of using an expert system and phpzip 1? The advantage of using an expert system and phpzip 1 is the ease of developing and distributing a web-based expert system application.
-What are the steps for building an expert system application? The steps are: define the name and purpose of the expert system, gather information from experts, build the knowledge base and rule base, build the user interface and inference engine, and carry out testing and evaluation.
-What are some examples of expert system applications? Examples include an expert system application for diagnosing cat diseases, one for diagnosing plant diseases, one for diagnosing computer faults, and so on.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/ramiin2/AutoGPT/autogpt/commands/web_selenium.py b/spaces/ramiin2/AutoGPT/autogpt/commands/web_selenium.py
deleted file mode 100644
index 11bdfeb1f1630fc6ff6f55d68e8d7233281c5098..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/autogpt/commands/web_selenium.py
+++ /dev/null
@@ -1,154 +0,0 @@
-"""Selenium web scraping module."""
-from __future__ import annotations
-
-import logging
-from pathlib import Path
-from sys import platform
-
-from bs4 import BeautifulSoup
-from selenium import webdriver
-from selenium.webdriver.chrome.options import Options as ChromeOptions
-from selenium.webdriver.common.by import By
-from selenium.webdriver.firefox.options import Options as FirefoxOptions
-from selenium.webdriver.remote.webdriver import WebDriver
-from selenium.webdriver.safari.options import Options as SafariOptions
-from selenium.webdriver.support import expected_conditions as EC
-from selenium.webdriver.support.wait import WebDriverWait
-from webdriver_manager.chrome import ChromeDriverManager
-from webdriver_manager.firefox import GeckoDriverManager
-
-import autogpt.processing.text as summary
-from autogpt.config import Config
-from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
-
-FILE_DIR = Path(__file__).parent.parent
-CFG = Config()
-
-
-def browse_website(url: str, question: str) -> tuple[str, WebDriver]:
- """Browse a website and return the answer and links to the user
-
- Args:
- url (str): The url of the website to browse
- question (str): The question asked by the user
-
- Returns:
- Tuple[str, WebDriver]: The answer and links to the user and the webdriver
- """
- driver, text = scrape_text_with_selenium(url)
- add_header(driver)
- summary_text = summary.summarize_text(url, text, question, driver)
- links = scrape_links_with_selenium(driver, url)
-
- # Limit links to 5
- if len(links) > 5:
- links = links[:5]
- close_browser(driver)
- return f"Answer gathered from website: {summary_text} \n \n Links: {links}", driver
-
-
-def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]:
- """Scrape text from a website using selenium
-
- Args:
- url (str): The url of the website to scrape
-
- Returns:
- Tuple[WebDriver, str]: The webdriver and the text scraped from the website
- """
- logging.getLogger("selenium").setLevel(logging.CRITICAL)
-
- options_available = {
- "chrome": ChromeOptions,
- "safari": SafariOptions,
- "firefox": FirefoxOptions,
- }
-
- options = options_available[CFG.selenium_web_browser]()
- options.add_argument(
- "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36"
- )
-
- if CFG.selenium_web_browser == "firefox":
- driver = webdriver.Firefox(
- executable_path=GeckoDriverManager().install(), options=options
- )
- elif CFG.selenium_web_browser == "safari":
- # Requires a bit more setup on the users end
- # See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
- driver = webdriver.Safari(options=options)
- else:
- if platform == "linux" or platform == "linux2":
- options.add_argument("--disable-dev-shm-usage")
- options.add_argument("--remote-debugging-port=9222")
-
- options.add_argument("--no-sandbox")
- if CFG.selenium_headless:
- options.add_argument("--headless")
- options.add_argument("--disable-gpu")
-
- driver = webdriver.Chrome(
- executable_path=ChromeDriverManager().install(), options=options
- )
- driver.get(url)
-
- WebDriverWait(driver, 10).until(
- EC.presence_of_element_located((By.TAG_NAME, "body"))
- )
-
- # Get the HTML content directly from the browser's DOM
- page_source = driver.execute_script("return document.body.outerHTML;")
- soup = BeautifulSoup(page_source, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- text = soup.get_text()
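-    # Normalize whitespace: strip each line, split it into phrases, and drop empty chunks before re-joining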
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
- return driver, text
-
-
-def scrape_links_with_selenium(driver: WebDriver, url: str) -> list[str]:
- """Scrape links from a website using selenium
-
- Args:
- driver (WebDriver): The webdriver to use to scrape the links
-
- Returns:
- List[str]: The links scraped from the website
- """
- page_source = driver.page_source
- soup = BeautifulSoup(page_source, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- hyperlinks = extract_hyperlinks(soup, url)
-
- return format_hyperlinks(hyperlinks)
-
-
-def close_browser(driver: WebDriver) -> None:
- """Close the browser
-
- Args:
- driver (WebDriver): The webdriver to close
-
- Returns:
- None
- """
- driver.quit()
-
-
-def add_header(driver: WebDriver) -> None:
- """Add a header to the website
-
- Args:
- driver (WebDriver): The webdriver to use to add the header
-
- Returns:
- None
- """
- driver.execute_script(open(f"{FILE_DIR}/js/overlay.js", "r").read())
diff --git a/spaces/ramiin2/AutoGPT/autogpt/memory/no_memory.py b/spaces/ramiin2/AutoGPT/autogpt/memory/no_memory.py
deleted file mode 100644
index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/autogpt/memory/no_memory.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""A class that does not store any data. This is the default memory provider."""
-from __future__ import annotations
-
-from typing import Any
-
-from autogpt.memory.base import MemoryProviderSingleton
-
-
-class NoMemory(MemoryProviderSingleton):
- """
- A class that does not store any data. This is the default memory provider.
- """
-
- def __init__(self, cfg):
- """
- Initializes the NoMemory provider.
-
- Args:
- cfg: The config object.
-
- Returns: None
- """
- pass
-
- def add(self, data: str) -> str:
- """
- Adds a data point to the memory. No action is taken in NoMemory.
-
- Args:
- data: The data to add.
-
- Returns: An empty string.
- """
- return ""
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
-
- Returns: None
- """
- return None
-
- def clear(self) -> str:
- """
- Clears the memory. No action is taken in NoMemory.
-
- Returns: An empty string.
- """
- return ""
-
- def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
- """
- Returns all the data in the memory that is relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
- num_relevant: The number of relevant data to return.
-
- Returns: None
- """
- return None
-
- def get_stats(self):
- """
- Returns: An empty dictionary as there are no stats in NoMemory.
- """
- return {}
diff --git a/spaces/ramonpzg/music-recsys-app/app.py b/spaces/ramonpzg/music-recsys-app/app.py
deleted file mode 100644
index 598951b129ef70efaa67bc83c5210ad54e12c954..0000000000000000000000000000000000000000
--- a/spaces/ramonpzg/music-recsys-app/app.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import streamlit as st
-from qdrant_client import QdrantClient
-from transformers import pipeline
-from audiocraft.models import MusicGen
-import os
-import torch
-# import baseten
-
-st.title("Music Recommendation App")
-st.subheader("A :red[Generative AI]-to-Real Music Approach")
-
-st.markdown("""
-The purpose of this app is to help creative people explore the possibilities of Generative AI in the music
-domain, while comparing their creations to music made by people with all sorts of instruments.
-
-There are several moving parts to this app and the most important ones are `transformers`, `audiocraft`, and
-Qdrant for our vector database.
-""")
-
-client = QdrantClient(
- "https://394294d5-30bb-4958-ad1a-15a3561edce5.us-east-1-0.aws.cloud.qdrant.io:6333",
- api_key=os.environ['QDRANT_API_KEY'],
-)
-
-# classifier = baseten.deployed_model_id('20awxxq')
-classifier = pipeline("audio-classification", model="ramonpzg/wav2musicgenre")#.to(device)
-model = MusicGen.get_pretrained('small')
-
-val1 = st.slider("How many seconds?", 5.0, 30.0, value=5.0, step=0.5)
-
-model.set_generation_params(
- use_sampling=True,
- top_k=250,
- duration=val1
-)
-
-music_prompt = st.text_input(
- label="Music Prompt",
- value="Fast-paced bachata in the style of Romeo Santos."
-)
-
-if st.button("Generate Some Music!"):
- with st.spinner("Wait for it..."):
- output = model.generate(descriptions=[music_prompt],progress=True)[0, 0, :].cpu().numpy()
- st.success("Done! :)")
-
- st.audio(output, sample_rate=32000)
-
- genres = classifier(output)
-
- if genres:
- st.markdown("## Best Prediction")
- col1, col2 = st.columns(2, gap="small")
- col1.subheader(genres[0]['label'])
- col2.metric(label="Score", value=f"{genres[0]['score']*100:.2f}%")
-
- st.markdown("### Other Predictions")
- col3, col4 = st.columns(2, gap="small")
- for idx, genre in enumerate(genres[1:]):
- if idx % 2 == 0:
- col3.metric(label=genre['label'], value=f"{genre['score']*100:.2f}%")
- else:
- col4.metric(label=genre['label'], value=f"{genre['score']*100:.2f}%")
-
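-    # Embed the generated clip (mean-pooled hidden states from the classifier's backbone) to query the vector database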
- features = classifier.feature_extractor(
- output, sampling_rate=16_000, return_tensors="pt", padding=True,
- return_attention_mask=True, max_length=16_000, truncation=True
- )
-
- with torch.no_grad():
- vectr = classifier.model(**features, output_hidden_states=True).hidden_states[-1].mean(dim=1)[0]
-
-
- results = client.search(
- collection_name="music_vectors",
- query_vector=vectr.tolist(),
- limit=10
- )
-
- st.markdown("## Real Recommendations")
-
- col5, col6 = st.columns(2)
-
- for idx, result in enumerate(results):
- if idx % 2 == 0:
- col5.header(f"Genre: {result.payload['genre']}")
- col5.markdown(f"### Artist: {result.payload['artist']}")
- col5.markdown(f"#### Song name: {result.payload['name']}")
- try:
- col5.audio(result.payload["urls"])
- except:
- continue
- else:
- col6.header(f"Genre: {result.payload['genre']}")
- col6.markdown(f"### Artist: {result.payload['artist']}")
- col6.markdown(f"#### Song name: {result.payload['name']}")
- try:
- col6.audio(result.payload["urls"])
- except:
- continue
\ No newline at end of file
diff --git a/spaces/realfill-library/RealFill-Training-UI/uploader.py b/spaces/realfill-library/RealFill-Training-UI/uploader.py
deleted file mode 100644
index 0ce697f0d47325a4d73f92c13304ae5f51df794a..0000000000000000000000000000000000000000
--- a/spaces/realfill-library/RealFill-Training-UI/uploader.py
+++ /dev/null
@@ -1,42 +0,0 @@
-from __future__ import annotations
-
-from huggingface_hub import HfApi
-
-
-class Uploader:
- def __init__(self, hf_token: str | None):
- self.api = HfApi(token=hf_token)
-
- def get_username(self) -> str:
- return self.api.whoami()['name']
-
- def upload(self,
- folder_path: str,
- repo_name: str,
- organization: str = '',
- repo_type: str = 'model',
- private: bool = True,
- delete_existing_repo: bool = False) -> str:
- if not folder_path:
- raise ValueError
- if not repo_name:
- raise ValueError
- if not organization:
- organization = self.get_username()
- repo_id = f'{organization}/{repo_name}'
- if delete_existing_repo:
- try:
- self.api.delete_repo(repo_id, repo_type=repo_type)
- except Exception:
- pass
- try:
- self.api.create_repo(repo_id, repo_type=repo_type, private=private)
- self.api.upload_folder(repo_id=repo_id,
- folder_path=folder_path,
- path_in_repo='.',
- repo_type=repo_type)
- url = f'https://huggingface.co/{repo_id}'
- message = f'Your model was successfully uploaded to {url} .'
- except Exception as e:
- message = str(e)
- return message
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Chef Damodaran Recipes Book In Tamil Pdf Download 2021.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Chef Damodaran Recipes Book In Tamil Pdf Download 2021.md
deleted file mode 100644
index cfe9445f634e3d1019ce715cd76be893d4b182d1..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Chef Damodaran Recipes Book In Tamil Pdf Download 2021.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-Chef Damodaran Recipes Book In Tamil Pdf Download: A Guide for Food Lovers
-If you are a food lover who wants to learn the secrets of cooking delicious dishes from different cuisines, you should definitely download Chef Damodaran's recipes book in Tamil pdf format. Chef Damodaran, also known as Dr. Chef Damu, is a renowned chef who has won many awards and accolades for his culinary skills. He has also hosted many popular TV shows and written several books on cooking. His recipes book in Tamil pdf format is a treasure trove of mouth-watering recipes that you can try at home.
-chef damodaran recipes book in tamil pdf download Download Zip ===== https://urlgoal.com/2uCK2k
-What is Chef Damodaran's recipes book in Tamil pdf format?
-Chef Damodaran's recipes book in Tamil pdf format is a collection of his best recipes from various sources, such as his TV shows, magazines, websites and books. The book contains more than 1000 recipes from different cuisines, such as Indian, Chinese, Continental, Thai, Mexican and more. The book also covers various categories of dishes, such as soups, salads, snacks, starters, main courses, desserts and more. The book is written in Tamil language and has clear instructions and pictures for each recipe.
-How to download Chef Damodaran's recipes book in Tamil pdf format?
-Downloading Chef Damodaran's recipes book in Tamil pdf format is very easy and simple. You just need to follow these steps:
-
-Go to the official website of Chef Damodaran at https://www.chefdamu.com/
-Click on the Books tab and select the book you want to download.
-Click on the Download button and enter your name and email address.
-You will receive a link to download the book in your email inbox.
-Click on the link and save the book on your device.
-
-Why should you download Chef Damodaran's recipes book in Tamil pdf format?
-There are many reasons why you should download Chef Damodaran's recipes book in Tamil pdf format. Some of them are:
-
-You can learn from a master chef who has years of experience and expertise in cooking.
-You can discover new and exciting dishes from different cuisines and cultures.
-You can impress your family and friends with your cooking skills and creativity.
-You can enjoy delicious and healthy food at home without spending too much money or time.
-You can access the book anytime and anywhere on your device without any hassle.
-
-Conclusion
-Chef Damodaran's recipes book in Tamil pdf format is a must-have for any food lover who wants to learn how to cook like a pro. The book contains a variety of recipes that are easy to follow and tasty to eat. You can download the book for free from Chef Damodaran's website and enjoy cooking at home.
-What are some of the recipes in Chef Damodaran's recipes book in Tamil pdf format?
-Chef Damodaran's recipes book in Tamil pdf format contains a wide range of recipes from different cuisines and categories. Some of the recipes are:
-
-
-Chicken Biryani: A classic and aromatic rice dish with chicken and spices.
-Tomato Soup: A simple and refreshing soup with tomatoes, garlic and herbs.
-Paneer Butter Masala: A rich and creamy curry with cottage cheese and butter.
-Vegetable Fried Rice: A quick and easy rice dish with mixed vegetables and soy sauce.
-Gulab Jamun: A popular and delicious dessert with deep-fried milk balls soaked in sugar syrup.
-
-What are some of the reviews of Chef Damodaran's recipes book in Tamil pdf format?
-Chef Damodaran's recipes book in Tamil pdf format has received many positive reviews from the readers and users. Some of the reviews are:
-
-"I love this book. It has so many recipes that are easy to follow and tasty to eat. I have tried many dishes from this book and they all turned out great. Chef Damodaran is a genius." - Priya
-"This book is a must-have for any food lover. It has recipes from different cuisines and cultures that are authentic and delicious. Chef Damodaran is a master chef who knows how to cook like a pro." - Rajesh
-"This book is a treasure trove of mouth-watering recipes that you can try at home. Chef Damodaran is a legend who has shared his secrets of cooking with us. I highly recommend this book to anyone who loves food." - Anitha
-
-What are some of the benefits of Chef Damodaran's recipes book in Tamil pdf format?
-Chef Damodaran's recipes book in Tamil pdf format has many benefits for the users and readers. Some of the benefits are:
-
-You can learn from a master chef who has years of experience and expertise in cooking.
-You can discover new and exciting dishes from different cuisines and cultures.
-You can improve your health and wellness by eating nutritious and balanced food.
-You can save money and time by cooking at home instead of ordering or eating out.
-You can have fun and enjoyment by cooking with your family and friends.
-
-How to use Chef Damodaran's recipes book in Tamil pdf format?
-Chef Damodaran's recipes book in Tamil pdf format is very easy and simple to use. You just need to follow these steps:
-
-Open the book on your device using a PDF reader or a browser.
-Choose the recipe you want to try from the index or the table of contents.
-Read the ingredients, measurements, methods and tips for the recipe.
-Gather the required ingredients and utensils for the recipe.
-Follow the instructions and cook the recipe step by step.
-Serve and enjoy the dish with your loved ones.
-
-What are some of the features of Chef Damodaran's recipes book in Tamil pdf format?
-Chef Damodaran's recipes book in Tamil pdf format has many features that make it a valuable and user-friendly resource for cooking enthusiasts. Some of the features are:
-
-The book is written in Tamil language, which makes it easy to understand and follow for Tamil speakers.
-The book has clear and colorful pictures for each recipe, which makes it attractive and appealing.
-The book has detailed and accurate measurements, methods and tips for each recipe, which makes it reliable and helpful.
-The book has a variety of recipes from different cuisines and categories, which makes it diverse and interesting.
-The book has an index and a table of contents, which makes it easy to navigate and find the recipe you want.
-
-How to get the best results from Chef Damodaran's recipes book in Tamil pdf format?
-Chef Damodaran's recipes book in Tamil pdf format is a great way to learn how to cook like a pro, but you also need to follow some guidelines and tips to get the best results from it. Here are some suggestions:
-
-Read the recipe carefully before you start cooking and make sure you have all the ingredients and utensils ready.
-Follow the instructions and measurements exactly as given in the recipe and do not make any changes or substitutions unless specified.
-Use fresh and good quality ingredients for better taste and nutrition.
-Cook on medium or low flame and do not overcook or undercook the food.
-Taste and adjust the seasoning as per your preference and serve hot or cold as per the recipe.
-
-What are some of the testimonials of Chef Damodaran's recipes book in Tamil pdf format?
-Chef Damodaran's recipes book in Tamil pdf format has received many positive testimonials from the users and readers who have tried his recipes and enjoyed his cooking. Here are some of the testimonials:
-
-"I have downloaded Chef Damodaran's recipes book in Tamil pdf format and I am very happy with it. The recipes are easy to follow and the dishes are delicious. I have tried many recipes from different cuisines and they all turned out great. Chef Damodaran is a genius." - Priya
-"Chef Damodaran's recipes book in Tamil pdf format is a must-have for any food lover. It has recipes from different cuisines and cultures that are authentic and tasty. Chef Damodaran is a master chef who knows how to cook like a pro. I highly recommend this book to anyone who loves food." - Rajesh
-"Chef Damodaran's recipes book in Tamil pdf format is a treasure trove of mouth-watering recipes that you can try at home. Chef Damodaran is a legend who has shared his secrets of cooking with us. I have learned a lot from his book and improved my cooking skills. Thank you Chef Damodaran." - Anitha
-
-How to contact Chef Damodaran for feedback or queries?
-If you have any feedback or queries regarding Chef Damodaran's recipes book in Tamil pdf format, you can contact him through his official website or his social media accounts. Here are some of the ways to contact him:
-
-Website: https://www.chefdamu.com/
-Email: chef@chefdamu.com
-Facebook: https://www.facebook.com/chefdamuofficial/
-Twitter: https://twitter.com/chefdamu
-YouTube: https://www.youtube.com/user/chefdamuofficial
-
-Conclusion
-Chef Damodaran's recipes book in Tamil pdf format is a free and easy way to learn how to cook like a pro. The book contains more than 1000 recipes from different cuisines and categories that are simple to follow and quick to prepare. You can download the book from Chef Damodaran's website and enjoy cooking at home.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/richardblythman/stabilityai-stable-diffusion-2-1/README.md b/spaces/richardblythman/stabilityai-stable-diffusion-2-1/README.md
deleted file mode 100644
index 2a1d84c618d721b075998a9634dd0b6dbacbfc07..0000000000000000000000000000000000000000
--- a/spaces/richardblythman/stabilityai-stable-diffusion-2-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stabilityai Stable Diffusion 2 1
-emoji: 🦀
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.13.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py
deleted file mode 100644
index 56e2874a47566b740899b0cdc3f311c02f83ad50..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/iou_balanced_neg_sampler.py
+++ /dev/null
@@ -1,158 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numpy as np
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .random_sampler import RandomSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class IoUBalancedNegSampler(RandomSampler):
- """IoU Balanced Sampling.
-
- arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019)
-
- Sampling proposals according to their IoU. `floor_fraction` of needed RoIs
- are sampled from proposals whose IoU are lower than `floor_thr` randomly.
- The others are sampled from proposals whose IoU are higher than
- `floor_thr`. These proposals are sampled from some bins evenly, which are
- split by `num_bins` via IoU evenly.
-
- Args:
- num (int): number of proposals.
- pos_fraction (float): fraction of positive proposals.
- floor_thr (float): threshold (minimum) IoU for IoU balanced sampling,
- set to -1 if all using IoU balanced sampling.
- floor_fraction (float): sampling fraction of proposals under floor_thr.
- num_bins (int): number of bins in IoU balanced sampling.
- """
-
- def __init__(self,
- num,
- pos_fraction,
- floor_thr=-1,
- floor_fraction=0,
- num_bins=3,
- **kwargs):
- super(IoUBalancedNegSampler, self).__init__(num, pos_fraction,
- **kwargs)
- assert floor_thr >= 0 or floor_thr == -1
- assert 0 <= floor_fraction <= 1
- assert num_bins >= 1
-
- self.floor_thr = floor_thr
- self.floor_fraction = floor_fraction
- self.num_bins = num_bins
-
- def sample_via_interval(self, max_overlaps, full_set, num_expected):
- """Sample according to the iou interval.
-
- Args:
- max_overlaps (torch.Tensor): IoU between bounding boxes and ground
- truth boxes.
-            full_set (set(int)): A full set of indices of boxes.
-            num_expected (int): Number of expected samples.
-
- Returns:
- np.ndarray: Indices of samples
- """
- max_iou = max_overlaps.max()
- iou_interval = (max_iou - self.floor_thr) / self.num_bins
- per_num_expected = int(num_expected / self.num_bins)
-
- sampled_inds = []
- for i in range(self.num_bins):
- start_iou = self.floor_thr + i * iou_interval
- end_iou = self.floor_thr + (i + 1) * iou_interval
- tmp_set = set(
- np.where(
- np.logical_and(max_overlaps >= start_iou,
- max_overlaps < end_iou))[0])
- tmp_inds = list(tmp_set & full_set)
- if len(tmp_inds) > per_num_expected:
- tmp_sampled_set = self.random_choice(tmp_inds,
- per_num_expected)
- else:
- tmp_sampled_set = np.array(tmp_inds, dtype=np.int)
- sampled_inds.append(tmp_sampled_set)
-
- sampled_inds = np.concatenate(sampled_inds)
- if len(sampled_inds) < num_expected:
- num_extra = num_expected - len(sampled_inds)
- extra_inds = np.array(list(full_set - set(sampled_inds)))
- if len(extra_inds) > num_extra:
- extra_inds = self.random_choice(extra_inds, num_extra)
- sampled_inds = np.concatenate([sampled_inds, extra_inds])
-
- return sampled_inds
-
- def _sample_neg(self, assign_result, num_expected, **kwargs):
- """Sample negative boxes.
-
- Args:
- assign_result (:obj:`AssignResult`): The assigned results of boxes.
- num_expected (int): The number of expected negative samples
-
- Returns:
- Tensor or ndarray: sampled indices.
- """
- neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False)
- if neg_inds.numel() != 0:
- neg_inds = neg_inds.squeeze(1)
- if len(neg_inds) <= num_expected:
- return neg_inds
- else:
- max_overlaps = assign_result.max_overlaps.cpu().numpy()
- # balance sampling for negative samples
- neg_set = set(neg_inds.cpu().numpy())
-
- if self.floor_thr > 0:
- floor_set = set(
- np.where(
- np.logical_and(max_overlaps >= 0,
- max_overlaps < self.floor_thr))[0])
- iou_sampling_set = set(
- np.where(max_overlaps >= self.floor_thr)[0])
- elif self.floor_thr == 0:
- floor_set = set(np.where(max_overlaps == 0)[0])
- iou_sampling_set = set(
- np.where(max_overlaps > self.floor_thr)[0])
- else:
- floor_set = set()
- iou_sampling_set = set(
- np.where(max_overlaps > self.floor_thr)[0])
- # for sampling interval calculation
- self.floor_thr = 0
-
- floor_neg_inds = list(floor_set & neg_set)
- iou_sampling_neg_inds = list(iou_sampling_set & neg_set)
- num_expected_iou_sampling = int(num_expected *
- (1 - self.floor_fraction))
- if len(iou_sampling_neg_inds) > num_expected_iou_sampling:
- if self.num_bins >= 2:
- iou_sampled_inds = self.sample_via_interval(
- max_overlaps, set(iou_sampling_neg_inds),
- num_expected_iou_sampling)
- else:
- iou_sampled_inds = self.random_choice(
- iou_sampling_neg_inds, num_expected_iou_sampling)
- else:
- iou_sampled_inds = np.array(
- iou_sampling_neg_inds, dtype=np.int)
- num_expected_floor = num_expected - len(iou_sampled_inds)
- if len(floor_neg_inds) > num_expected_floor:
- sampled_floor_inds = self.random_choice(
- floor_neg_inds, num_expected_floor)
- else:
- sampled_floor_inds = np.array(floor_neg_inds, dtype=np.int)
- sampled_inds = np.concatenate(
- (sampled_floor_inds, iou_sampled_inds))
- if len(sampled_inds) < num_expected:
- num_extra = num_expected - len(sampled_inds)
- extra_inds = np.array(list(neg_set - set(sampled_inds)))
- if len(extra_inds) > num_extra:
- extra_inds = self.random_choice(extra_inds, num_extra)
- sampled_inds = np.concatenate((sampled_inds, extra_inds))
- sampled_inds = torch.from_numpy(sampled_inds).long().to(
- assign_result.gt_inds.device)
- return sampled_inds
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Human Zbuilder Plugin For 3ds Max (2009-2012).md b/spaces/rorallitri/biomedical-language-models/logs/Human Zbuilder Plugin For 3ds Max (2009-2012).md
deleted file mode 100644
index f36ebcae890bee04661b8831932b06b2555a46b9..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Human Zbuilder Plugin For 3ds Max (2009-2012).md
+++ /dev/null
@@ -1,6 +0,0 @@
-Human Zbuilder Plugin for 3ds Max (2009-2012) DOWNLOAD ••• https://tinurll.com/2uzobh
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rossellison/kpop-face-generator/stylegan3-fun/dnnlib/util.py b/spaces/rossellison/kpop-face-generator/stylegan3-fun/dnnlib/util.py
deleted file mode 100644
index 6bbdf3bd8fe1c138cd969d37dcc52190b45c4c16..0000000000000000000000000000000000000000
--- a/spaces/rossellison/kpop-face-generator/stylegan3-fun/dnnlib/util.py
+++ /dev/null
@@ -1,491 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Miscellaneous utility classes and functions."""
-
-import ctypes
-import fnmatch
-import importlib
-import inspect
-import numpy as np
-import os
-import shutil
-import sys
-import types
-import io
-import pickle
-import re
-import requests
-import html
-import hashlib
-import glob
-import tempfile
-import urllib
-import urllib.request
-import uuid
-
-from distutils.util import strtobool
-from typing import Any, List, Tuple, Union
-
-
-# Util classes
-# ------------------------------------------------------------------------------------------
-
-
-class EasyDict(dict):
- """Convenience class that behaves like a dict but allows access with the attribute syntax."""
-
- def __getattr__(self, name: str) -> Any:
- try:
- return self[name]
- except KeyError:
- raise AttributeError(name)
-
- def __setattr__(self, name: str, value: Any) -> None:
- self[name] = value
-
- def __delattr__(self, name: str) -> None:
- del self[name]
-
-
-class Logger(object):
- """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file."""
-
- def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True):
- self.file = None
-
- if file_name is not None:
- self.file = open(file_name, file_mode)
-
- self.should_flush = should_flush
- self.stdout = sys.stdout
- self.stderr = sys.stderr
-
- sys.stdout = self
- sys.stderr = self
-
- def __enter__(self) -> "Logger":
- return self
-
- def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
- self.close()
-
- def write(self, text: Union[str, bytes]) -> None:
- """Write text to stdout (and a file) and optionally flush."""
- if isinstance(text, bytes):
- text = text.decode()
- if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash
- return
-
- if self.file is not None:
- self.file.write(text)
-
- self.stdout.write(text)
-
- if self.should_flush:
- self.flush()
-
- def flush(self) -> None:
- """Flush written text to both stdout and a file, if open."""
- if self.file is not None:
- self.file.flush()
-
- self.stdout.flush()
-
- def close(self) -> None:
- """Flush, close possible files, and remove stdout/stderr mirroring."""
- self.flush()
-
- # if using multiple loggers, prevent closing in wrong order
- if sys.stdout is self:
- sys.stdout = self.stdout
- if sys.stderr is self:
- sys.stderr = self.stderr
-
- if self.file is not None:
- self.file.close()
- self.file = None
-
-
-# Cache directories
-# ------------------------------------------------------------------------------------------
-
-_dnnlib_cache_dir = None
-
-def set_cache_dir(path: str) -> None:
- global _dnnlib_cache_dir
- _dnnlib_cache_dir = path
-
-def make_cache_dir_path(*paths: str) -> str:
- if _dnnlib_cache_dir is not None:
- return os.path.join(_dnnlib_cache_dir, *paths)
- if 'DNNLIB_CACHE_DIR' in os.environ:
- return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths)
- if 'HOME' in os.environ:
- return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths)
- if 'USERPROFILE' in os.environ:
- return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths)
- return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths)
-
-# Small util functions
-# ------------------------------------------------------------------------------------------
-
-
-def format_time(seconds: Union[int, float]) -> str:
- """Convert the seconds to human readable string with days, hours, minutes and seconds."""
- s = int(np.rint(seconds))
-
- if s < 60:
- return "{0}s".format(s)
- elif s < 60 * 60:
- return "{0}m {1:02}s".format(s // 60, s % 60)
- elif s < 24 * 60 * 60:
- return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60)
- else:
- return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60)
-
-
-def format_time_brief(seconds: Union[int, float]) -> str:
- """Convert the seconds to human readable string with days, hours, minutes and seconds."""
- s = int(np.rint(seconds))
-
- if s < 60:
- return "{0}s".format(s)
- elif s < 60 * 60:
- return "{0}m {1:02}s".format(s // 60, s % 60)
- elif s < 24 * 60 * 60:
- return "{0}h {1:02}m".format(s // (60 * 60), (s // 60) % 60)
- else:
- return "{0}d {1:02}h".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24)
-
-
-def ask_yes_no(question: str) -> bool:
- """Ask the user the question until the user inputs a valid answer."""
- while True:
- try:
- print("{0} [y/n]".format(question))
- return strtobool(input().lower())
- except ValueError:
- pass
-
-
-def tuple_product(t: Tuple) -> Any:
- """Calculate the product of the tuple elements."""
- result = 1
-
- for v in t:
- result *= v
-
- return result
-
-
-_str_to_ctype = {
- "uint8": ctypes.c_ubyte,
- "uint16": ctypes.c_uint16,
- "uint32": ctypes.c_uint32,
- "uint64": ctypes.c_uint64,
- "int8": ctypes.c_byte,
- "int16": ctypes.c_int16,
- "int32": ctypes.c_int32,
- "int64": ctypes.c_int64,
- "float32": ctypes.c_float,
- "float64": ctypes.c_double
-}
-
-
-def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]:
- """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes."""
- type_str = None
-
- if isinstance(type_obj, str):
- type_str = type_obj
- elif hasattr(type_obj, "__name__"):
- type_str = type_obj.__name__
- elif hasattr(type_obj, "name"):
- type_str = type_obj.name
- else:
- raise RuntimeError("Cannot infer type name from input")
-
- assert type_str in _str_to_ctype.keys()
-
- my_dtype = np.dtype(type_str)
- my_ctype = _str_to_ctype[type_str]
-
- assert my_dtype.itemsize == ctypes.sizeof(my_ctype)
-
- return my_dtype, my_ctype
-
-
-def is_pickleable(obj: Any) -> bool:
- try:
- with io.BytesIO() as stream:
- pickle.dump(obj, stream)
- return True
- except:
- return False
-
-
-# Functionality to import modules/objects by name, and call functions by name
-# ------------------------------------------------------------------------------------------
-
-def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]:
- """Searches for the underlying module behind the name to some python object.
- Returns the module and the object name (original name with module part removed)."""
-
- # allow convenience shorthands, substitute them by full names
- obj_name = re.sub("^np.", "numpy.", obj_name)
- obj_name = re.sub("^tf.", "tensorflow.", obj_name)
-
- # list alternatives for (module_name, local_obj_name)
- parts = obj_name.split(".")
- name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) for i in range(len(parts), 0, -1)]
-
- # try each alternative in turn
- for module_name, local_obj_name in name_pairs:
- try:
- module = importlib.import_module(module_name) # may raise ImportError
- get_obj_from_module(module, local_obj_name) # may raise AttributeError
- return module, local_obj_name
- except:
- pass
-
- # maybe some of the modules themselves contain errors?
- for module_name, _local_obj_name in name_pairs:
- try:
- importlib.import_module(module_name) # may raise ImportError
- except ImportError:
- if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"):
- raise
-
- # maybe the requested attribute is missing?
- for module_name, local_obj_name in name_pairs:
- try:
- module = importlib.import_module(module_name) # may raise ImportError
- get_obj_from_module(module, local_obj_name) # may raise AttributeError
- except ImportError:
- pass
-
- # we are out of luck, but we have no idea why
- raise ImportError(obj_name)
-
-
-def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any:
- """Traverses the object name and returns the last (rightmost) python object."""
- if obj_name == '':
- return module
- obj = module
- for part in obj_name.split("."):
- obj = getattr(obj, part)
- return obj
-
-
-def get_obj_by_name(name: str) -> Any:
- """Finds the python object with the given name."""
- module, obj_name = get_module_from_obj_name(name)
- return get_obj_from_module(module, obj_name)
-
-
-def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any:
- """Finds the python object with the given name and calls it as a function."""
- assert func_name is not None
- func_obj = get_obj_by_name(func_name)
- assert callable(func_obj)
- return func_obj(*args, **kwargs)
-
-
-def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any:
- """Finds the python class with the given name and constructs it with the given arguments."""
- return call_func_by_name(*args, func_name=class_name, **kwargs)
-
-
-def get_module_dir_by_obj_name(obj_name: str) -> str:
- """Get the directory path of the module containing the given object name."""
- module, _ = get_module_from_obj_name(obj_name)
- return os.path.dirname(inspect.getfile(module))
-
-
-def is_top_level_function(obj: Any) -> bool:
- """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'."""
- return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__
-
-
-def get_top_level_function_name(obj: Any) -> str:
- """Return the fully-qualified name of a top-level function."""
- assert is_top_level_function(obj)
- module = obj.__module__
- if module == '__main__':
- module = os.path.splitext(os.path.basename(sys.modules[module].__file__))[0]
- return module + "." + obj.__name__
-
-
-# File system helpers
-# ------------------------------------------------------------------------------------------
-
-def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]:
- """List all files recursively in a given directory while ignoring given file and directory names.
- Returns list of tuples containing both absolute and relative paths."""
- assert os.path.isdir(dir_path)
- base_name = os.path.basename(os.path.normpath(dir_path))
-
- if ignores is None:
- ignores = []
-
- result = []
-
- for root, dirs, files in os.walk(dir_path, topdown=True):
- for ignore_ in ignores:
- dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)]
-
- # dirs need to be edited in-place
- for d in dirs_to_remove:
- dirs.remove(d)
-
- files = [f for f in files if not fnmatch.fnmatch(f, ignore_)]
-
- absolute_paths = [os.path.join(root, f) for f in files]
- relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths]
-
- if add_base_to_relative:
- relative_paths = [os.path.join(base_name, p) for p in relative_paths]
-
- assert len(absolute_paths) == len(relative_paths)
- result += zip(absolute_paths, relative_paths)
-
- return result
-
-
-def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None:
- """Takes in a list of tuples of (src, dst) paths and copies files.
- Will create all necessary directories."""
- for file in files:
- target_dir_name = os.path.dirname(file[1])
-
- # will create all intermediate-level directories
- if not os.path.exists(target_dir_name):
- os.makedirs(target_dir_name)
-
- shutil.copyfile(file[0], file[1])
-
-
-# URL helpers
-# ------------------------------------------------------------------------------------------
-
-def is_url(obj: Any, allow_file_urls: bool = False) -> bool:
- """Determine whether the given object is a valid URL string."""
- if not isinstance(obj, str) or not "://" in obj:
- return False
- if allow_file_urls and obj.startswith('file://'):
- return True
- try:
- res = requests.compat.urlparse(obj)
- if not res.scheme or not res.netloc or not "." in res.netloc:
- return False
- res = requests.compat.urlparse(requests.compat.urljoin(obj, "/"))
- if not res.scheme or not res.netloc or not "." in res.netloc:
- return False
- except:
- return False
- return True
-
-
-def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any:
- """Download the given URL and return a binary-mode file object to access the data."""
- assert num_attempts >= 1
- assert not (return_filename and (not cache))
-
- # Doesn't look like an URL scheme so interpret it as a local filename.
- if not re.match('^[a-z]+://', url):
- return url if return_filename else open(url, "rb")
-
- # Handle file URLs. This code handles unusual file:// patterns that
- # arise on Windows:
- #
- # file:///c:/foo.txt
- #
- # which would translate to a local '/c:/foo.txt' filename that's
- # invalid. Drop the forward slash for such pathnames.
- #
- # If you touch this code path, you should test it on both Linux and
- # Windows.
- #
- # Some internet resources suggest using urllib.request.url2pathname() but
- # but that converts forward slashes to backslashes and this causes
- # its own set of problems.
- if url.startswith('file://'):
- filename = urllib.parse.urlparse(url).path
- if re.match(r'^/[a-zA-Z]:', filename):
- filename = filename[1:]
- return filename if return_filename else open(filename, "rb")
-
- assert is_url(url)
-
- # Lookup from cache.
- if cache_dir is None:
- cache_dir = make_cache_dir_path('downloads')
-
- url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest()
- if cache:
- cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*"))
- if len(cache_files) == 1:
- filename = cache_files[0]
- return filename if return_filename else open(filename, "rb")
-
- # Download.
- url_name = None
- url_data = None
- with requests.Session() as session:
- if verbose:
- print("Downloading %s ..." % url, end="", flush=True)
- for attempts_left in reversed(range(num_attempts)):
- try:
- with session.get(url) as res:
- res.raise_for_status()
- if len(res.content) == 0:
- raise IOError("No data received")
-
- if len(res.content) < 8192:
- content_str = res.content.decode("utf-8")
- if "download_warning" in res.headers.get("Set-Cookie", ""):
- links = [html.unescape(link) for link in content_str.split('"') if "export=download" in link]
- if len(links) == 1:
- url = requests.compat.urljoin(url, links[0])
- raise IOError("Google Drive virus checker nag")
- if "Google Drive - Quota exceeded" in content_str:
- raise IOError("Google Drive download quota exceeded -- please try again later")
-
- match = re.search(r'filename="([^"]*)"', res.headers.get("Content-Disposition", ""))
- url_name = match[1] if match else url
- url_data = res.content
- if verbose:
- print(" done")
- break
- except KeyboardInterrupt:
- raise
- except:
- if not attempts_left:
- if verbose:
- print(" failed")
- raise
- if verbose:
- print(".", end="", flush=True)
-
- # Save to cache.
- if cache:
- safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name)
- cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name)
- temp_file = os.path.join(cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name)
- os.makedirs(cache_dir, exist_ok=True)
- with open(temp_file, "wb") as f:
- f.write(url_data)
- os.replace(temp_file, cache_file) # atomic
- if return_filename:
- return cache_file
-
- # Return data as file object.
- assert not return_filename
- return io.BytesIO(url_data)
diff --git a/spaces/runa91/bite_gradio/src/combined_model/loss_image_to_3d_refinement.py b/spaces/runa91/bite_gradio/src/combined_model/loss_image_to_3d_refinement.py
deleted file mode 100644
index 8b3b85001ca7457afd5cfab639094de69b3203a6..0000000000000000000000000000000000000000
--- a/spaces/runa91/bite_gradio/src/combined_model/loss_image_to_3d_refinement.py
+++ /dev/null
@@ -1,216 +0,0 @@
-
-
-import torch
-import numpy as np
-import pickle as pkl
-
-import os
-import sys
-sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..', 'src'))
-# from priors.pose_prior_35 import Prior
-# from priors.tiger_pose_prior.tiger_pose_prior import GaussianMixturePrior
-from priors.normalizing_flow_prior.normalizing_flow_prior import NormalizingFlowPrior
-from priors.shape_prior import ShapePrior
-from lifting_to_3d.utils.geometry_utils import rot6d_to_rotmat, batch_rot2aa, geodesic_loss_R
-from combined_model.loss_utils.loss_utils import leg_sideway_error, leg_torsion_error, tail_sideway_error, tail_torsion_error, spine_torsion_error, spine_sideway_error
-from combined_model.loss_utils.loss_utils_gc import LossGConMesh, calculate_plane_errors_batch
-
-from priors.shape_prior import ShapePrior
-from configs.SMAL_configs import SMAL_MODEL_CONFIG
-
-from priors.helper_3dcgmodel_loss import load_dog_betas_for_3dcgmodel_loss
-
-
-class LossRef(torch.nn.Module):
- def __init__(self, smal_model_type, data_info, nf_version=None):
- super(LossRef, self).__init__()
- self.criterion_regr = torch.nn.MSELoss() # takes the mean
- self.criterion_class = torch.nn.CrossEntropyLoss()
-
- class_weights_isflat = torch.tensor([12, 2])
- self.criterion_class_isflat = torch.nn.CrossEntropyLoss(weight=class_weights_isflat)
- self.criterion_l1 = torch.nn.L1Loss()
- self.geodesic_loss = geodesic_loss_R(reduction='mean')
- self.gc_loss_on_mesh = LossGConMesh()
- self.data_info = data_info
- self.smal_model_type = smal_model_type
- self.register_buffer('keypoint_weights', torch.tensor(data_info.keypoint_weights)[None, :])
- # if nf_version is not None:
- # self.normalizing_flow_pose_prior = NormalizingFlowPrior(nf_version=nf_version)
-
- self.smal_model_data_path = SMAL_MODEL_CONFIG[self.smal_model_type]['smal_model_data_path']
- self.shape_prior = ShapePrior(self.smal_model_data_path) # here we just need mean and cov
-
- remeshing_path = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/smal_data_remeshed/uniform_surface_sampling/my_smpl_39dogsnorm_Jr_4_dog_remesh4000_info.pkl'
- with open(remeshing_path, 'rb') as fp:
- self.remeshing_dict = pkl.load(fp)
- self.remeshing_relevant_faces = torch.tensor(self.remeshing_dict['smal_faces'][self.remeshing_dict['faceid_closest']], dtype=torch.long)
- self.remeshing_relevant_barys = torch.tensor(self.remeshing_dict['barys_closest'], dtype=torch.float32)
-
-
-
- # load 3d data for the unity dogs (an optional shape prior for 11 breeds)
- self.unity_smal_shape_prior_dogs = SMAL_MODEL_CONFIG[self.smal_model_type]['unity_smal_shape_prior_dogs']
- if self.unity_smal_shape_prior_dogs is not None:
- self.dog_betas_unity = load_dog_betas_for_3dcgmodel_loss(self.unity_smal_shape_prior_dogs, self.smal_model_type)
- else:
- self.dog_betas_unity = None
-
-
-
-
-
-
-
- def forward(self, output_ref, output_ref_comp, target_dict, weight_dict_ref):
- # output_reproj: ['vertices_smal', 'keyp_3d', 'keyp_2d', 'silh_image']
- # target_dict: ['index', 'center', 'scale', 'pts', 'tpts', 'target_weight']
- batch_size = output_ref['keyp_2d'].shape[0]
- loss_dict_temp = {}
-
- # loss on reprojected keypoints
- output_kp_resh = (output_ref['keyp_2d']).reshape((-1, 2))
- target_kp_resh = (target_dict['tpts'][:, :, :2] / 64. * (256. - 1)).reshape((-1, 2))
- weights_resh = target_dict['tpts'][:, :, 2].reshape((-1))
- keyp_w_resh = self.keypoint_weights.repeat((batch_size, 1)).reshape((-1))
- loss_dict_temp['keyp_ref'] = ((((output_kp_resh - target_kp_resh)[weights_resh>0]**2).sum(axis=1).sqrt()*weights_resh[weights_resh>0])*keyp_w_resh[weights_resh>0]).sum() / \
- max((weights_resh[weights_resh>0]*keyp_w_resh[weights_resh>0]).sum(), 1e-5)
-
- # loss on reprojected silhouette
- assert output_ref['silh'].shape == (target_dict['silh'][:, None, :, :]).shape
- silh_loss_type = 'default'
- if silh_loss_type == 'default':
- with torch.no_grad():
- thr_silh = 20
- diff = torch.norm(output_kp_resh - target_kp_resh, dim=1)
- diff_x = diff.reshape((batch_size, -1))
- weights_resh_x = weights_resh.reshape((batch_size, -1))
- unweighted_kp_mean_dist = (diff_x * weights_resh_x).sum(dim=1) / ((weights_resh_x).sum(dim=1)+1e-6)
- loss_silh_bs = ((output_ref['silh'] - target_dict['silh'][:, None, :, :]) ** 2).sum(axis=3).sum(axis=2).sum(axis=1) / (output_ref['silh'].shape[2]*output_ref['silh'].shape[3])
-            loss_dict_temp['silh_ref'] = loss_silh_bs[unweighted_kp_mean_dist<thr_silh].sum() / batch_size
-
-        # losses for ground contact: penalize deviation of labeled ground-contact vertices from a common plane
-        keep_smal_mesh = False
-        if 'gc_plane' in weight_dict_ref.keys():
-            if weight_dict_ref['gc_plane'] > 0:
- if keep_smal_mesh:
- target_gc_class = target_dict['gc'][:, :, 0]
- gc_errors_plane = calculate_plane_errors_batch(output_ref['vertices_smal'], target_gc_class, target_dict['has_gc'], target_dict['has_gc_is_touching'])
- loss_dict_temp['gc_plane'] = torch.mean(gc_errors_plane)
- else: # use a uniformly sampled mesh
- target_gc_class = target_dict['gc'][:, :, 0]
- device = output_ref['vertices_smal'].device
- remeshing_relevant_faces = self.remeshing_relevant_faces.to(device)
- remeshing_relevant_barys = self.remeshing_relevant_barys.to(device)
-
- bs = output_ref['vertices_smal'].shape[0]
- # verts_remeshed = torch.einsum('ij,aijk->aik', remeshing_relevant_barys, output_ref['vertices_smal'][:, self.remeshing_relevant_faces])
- # sel_verts_comparison = output_ref['vertices_smal'][:, self.remeshing_relevant_faces]
- # verts_remeshed = torch.einsum('ij,aijk->aik', remeshing_relevant_barys, sel_verts_comparison)
- sel_verts = torch.index_select(output_ref['vertices_smal'], dim=1, index=remeshing_relevant_faces.reshape((-1))).reshape((bs, remeshing_relevant_faces.shape[0], 3, 3))
- verts_remeshed = torch.einsum('ij,aijk->aik', remeshing_relevant_barys, sel_verts)
- target_gc_class_remeshed = torch.einsum('ij,aij->ai', remeshing_relevant_barys, target_gc_class[:, self.remeshing_relevant_faces].to(device=device, dtype=torch.float32))
- target_gc_class_remeshed_prep = torch.round(target_gc_class_remeshed).to(torch.long)
- gc_errors_plane, gc_errors_under_plane = calculate_plane_errors_batch(verts_remeshed, target_gc_class_remeshed_prep, target_dict['has_gc'], target_dict['has_gc_is_touching'])
- loss_dict_temp['gc_plane'] = torch.mean(gc_errors_plane)
- loss_dict_temp['gc_blowplane'] = torch.mean(gc_errors_under_plane)
-
- # error on classification if the ground plane is flat
- if 'gc_isflat' in weight_dict_ref.keys():
- # import pdb; pdb.set_trace()
- self.criterion_class_isflat.to(device)
- loss_dict_temp['gc_isflat'] = self.criterion_class(output_ref['isflat'], target_dict['isflat'].to(device))
-
- # if we refine the shape WITHIN the refinement newtork (shaperef_type is not inexistent)
- # shape regularization
- # 'smal': loss on betas (pca coefficients), betas should be close to 0
- # 'limbs...' loss on selected betas_limbs
- device = output_ref_comp['ref_trans_notnorm'].device
- loss_shape_weighted_list = [torch.zeros((1), device=device).mean()]
- if 'shape_options' in weight_dict_ref.keys():
- for ind_sp, sp in enumerate(weight_dict_ref['shape_options']):
- weight_sp = weight_dict_ref['shape'][ind_sp]
- # self.logscale_part_list = ['legs_l', 'legs_f', 'tail_l', 'tail_f', 'ears_y', 'ears_l', 'head_l']
- if sp == 'smal':
- loss_shape_tmp = self.shape_prior(output_ref['betas'])
- elif sp == 'limbs':
- loss_shape_tmp = torch.mean((output_ref['betas_limbs'])**2)
- elif sp == 'limbs7':
- limb_coeffs_list = [0.01, 1, 0.1, 1, 1, 0.1, 2]
- limb_coeffs = torch.tensor(limb_coeffs_list).to(torch.float32).to(target_dict['tpts'].device)
- loss_shape_tmp = torch.mean((output_ref['betas_limbs'] * limb_coeffs[None, :])**2)
- else:
- raise NotImplementedError
- loss_shape_weighted_list.append(weight_sp * loss_shape_tmp)
- loss_shape_weighted = torch.stack((loss_shape_weighted_list)).sum()
-
-
-
-
-
- # 3D loss for dogs for which we have a unity model or toy figure
- loss_dict_temp['models3d'] = torch.zeros((1), device=device).mean().to(output_ref['betas'].device)
- if 'models3d' in weight_dict_ref.keys():
- if weight_dict_ref['models3d'] > 0:
- assert (self.dog_betas_unity is not None)
- if weight_dict_ref['models3d'] > 0:
- for ind_dog in range(target_dict['breed_index'].shape[0]):
- breed_index = np.asscalar(target_dict['breed_index'][ind_dog].detach().cpu().numpy())
- if breed_index in self.dog_betas_unity.keys():
- betas_target = self.dog_betas_unity[breed_index][:output_ref['betas'].shape[1]].to(output_ref['betas'].device)
- betas_output = output_ref['betas'][ind_dog, :]
- betas_limbs_output = output_ref['betas_limbs'][ind_dog, :]
- loss_dict_temp['models3d'] += ((betas_limbs_output**2).sum() + ((betas_output-betas_target)**2).sum()) / (output_ref['betas'].shape[1] + output_ref['betas_limbs'].shape[1])
- else:
- weight_dict_ref['models3d'] = 0.0
- else:
- weight_dict_ref['models3d'] = 0.0
-
-
-
-
-
-
-
-
-
-
-
- # weight the losses
- loss = torch.zeros((1)).mean().to(device=output_ref['keyp_2d'].device, dtype=output_ref['keyp_2d'].dtype)
- loss_dict = {}
- for loss_name in weight_dict_ref.keys():
- if not loss_name in ['shape', 'shape_options']:
- if weight_dict_ref[loss_name] > 0:
- loss_weighted = loss_dict_temp[loss_name] * weight_dict_ref[loss_name]
- loss_dict[loss_name] = loss_weighted.item()
- loss += loss_weighted
- loss += loss_shape_weighted
- loss_dict['loss'] = loss.item()
-
- return loss, loss_dict
-
-
diff --git a/spaces/sasaki-saku/www_www/greeting.md b/spaces/sasaki-saku/www_www/greeting.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/scedlatioru/img-to-music/example/Corel Videostudio 12 Activation Code NEW Keygen.md b/spaces/scedlatioru/img-to-music/example/Corel Videostudio 12 Activation Code NEW Keygen.md
deleted file mode 100644
index 9e13aa9f5f21f30ceb593b2707655d516da4e112..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Corel Videostudio 12 Activation Code NEW Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-corel videostudio 12 activation code keygen Download Zip » https://gohhs.com/2uEAyQ
-
-video thumbnail. Playing next. 1:12. COREL VIDEOSTUDIO PRO X6 + COREL DRAW X6 Å“ Keygen ... 1fdad05405
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Kode Aktivasi Camfrog Pro 6.3 Free Full Download Crack 21 !!INSTALL!!.md b/spaces/scedlatioru/img-to-music/example/Kode Aktivasi Camfrog Pro 6.3 Free Full Download Crack 21 !!INSTALL!!.md
deleted file mode 100644
index 98ab1b7c531989049cb6bb5810b55e6eaead5989..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Kode Aktivasi Camfrog Pro 6.3 Free Full Download Crack 21 !!INSTALL!!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Kode Aktivasi Camfrog Pro 6.3 Free Full Download Crack 21 Download Zip ››››› https://gohhs.com/2uEAC1
-
-RarmaRadio Pro 2.69 3d Game Studio A8 8.40 Crack, Code aktivasi manual . ... free!. File: gamestudio a8 crack Dоwnlоаd spеed: 6 Mb/s Compression: Zip ... Studio A8 Full Crack Antivirus, Répondre en citant · 3d Game Studio A8 Full Crack Antivirus. ... Download Crack 3d Game Studio A8 with activation code keygen or. 4d29de3e1b
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Xfer Records Cthulhu V1 03 WiN MAC OSX UNIONl.md b/spaces/scedlatioru/img-to-music/example/Xfer Records Cthulhu V1 03 WiN MAC OSX UNIONl.md
deleted file mode 100644
index 670a539c589b319dd8b8528b7fa565faf4dc1768..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Xfer Records Cthulhu V1 03 WiN MAC OSX UNIONl.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Xfer Records Cthulhu V1 03 WiN MAC OSX UNIONl Download 🔗 https://gohhs.com/2uEyTo
-
-Pathologic 2 Supporter Bundle Download For Pc [portable Edition]l ... Xfer Records Cthulhu V1 03 WiN MAC OSX UNIONl · Rime Berta ... 1fdad05405
-
-
-
diff --git a/spaces/scikit-learn/sentiment-analysis/app.py b/spaces/scikit-learn/sentiment-analysis/app.py
deleted file mode 100644
index c616c5bf306415f014fbaeed2ae4010193e778f8..0000000000000000000000000000000000000000
--- a/spaces/scikit-learn/sentiment-analysis/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import sklearn
-import gradio as gr
-import joblib
-
-pipe = joblib.load("./pipeline.pkl")
-inputs = [gr.Textbox(value = "I love this!")]
-outputs = [gr.Label(label = "Sentiment")]
-title = "Sentiment Analysis Classifier"
-description = "This is a sentiment classifier using longformer model with a logistic regression head. "
-def infer(inputs):
- predictions = pipe.predict_proba([inputs])
- label = {
- "negative":str(predictions[0][0]),
- "positive":str(predictions[0][1]),
- }
- return label
-gr.Interface(infer, inputs = inputs, outputs = outputs, title = title, description = description).launch()
\ No newline at end of file
diff --git a/spaces/segments-tobias/conex/espnet/bin/asr_recog.py b/spaces/segments-tobias/conex/espnet/bin/asr_recog.py
deleted file mode 100644
index dc7c64a76f187ff3b132076fc102e9bac67e311f..0000000000000000000000000000000000000000
--- a/spaces/segments-tobias/conex/espnet/bin/asr_recog.py
+++ /dev/null
@@ -1,363 +0,0 @@
-#!/usr/bin/env python3
-# encoding: utf-8
-
-# Copyright 2017 Johns Hopkins University (Shinji Watanabe)
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""End-to-end speech recognition model decoding script."""
-
-import configargparse
-import logging
-import os
-import random
-import sys
-
-import numpy as np
-
-from espnet.utils.cli_utils import strtobool
-
-# NOTE: you need this func to generate our sphinx doc
-
-
-def get_parser():
- """Get default arguments."""
- parser = configargparse.ArgumentParser(
- description="Transcribe text from speech using "
- "a speech recognition model on one CPU or GPU",
- config_file_parser_class=configargparse.YAMLConfigFileParser,
- formatter_class=configargparse.ArgumentDefaultsHelpFormatter,
- )
- # general configuration
- parser.add("--config", is_config_file=True, help="Config file path")
- parser.add(
- "--config2",
- is_config_file=True,
- help="Second config file path that overwrites the settings in `--config`",
- )
- parser.add(
- "--config3",
- is_config_file=True,
- help="Third config file path that overwrites the settings "
- "in `--config` and `--config2`",
- )
-
- parser.add_argument("--ngpu", type=int, default=0, help="Number of GPUs")
- parser.add_argument(
- "--dtype",
- choices=("float16", "float32", "float64"),
- default="float32",
- help="Float precision (only available in --api v2)",
- )
- parser.add_argument(
- "--backend",
- type=str,
- default="chainer",
- choices=["chainer", "pytorch"],
- help="Backend library",
- )
- parser.add_argument("--debugmode", type=int, default=1, help="Debugmode")
- parser.add_argument("--seed", type=int, default=1, help="Random seed")
- parser.add_argument("--verbose", "-V", type=int, default=1, help="Verbose option")
- parser.add_argument(
- "--batchsize",
- type=int,
- default=1,
- help="Batch size for beam search (0: means no batch processing)",
- )
- parser.add_argument(
- "--preprocess-conf",
- type=str,
- default=None,
- help="The configuration file for the pre-processing",
- )
- parser.add_argument(
- "--api",
- default="v1",
- choices=["v1", "v2"],
- help="Beam search APIs "
- "v1: Default API. It only supports the ASRInterface.recognize method "
- "and DefaultRNNLM. "
- "v2: Experimental API. It supports any models that implements ScorerInterface.",
- )
- # task related
- parser.add_argument(
- "--recog-json", type=str, help="Filename of recognition data (json)"
- )
- parser.add_argument(
- "--result-label",
- type=str,
- required=True,
- help="Filename of result label data (json)",
- )
- # model (parameter) related
- parser.add_argument(
- "--model", type=str, required=True, help="Model file parameters to read"
- )
- parser.add_argument(
- "--model-conf", type=str, default=None, help="Model config file"
- )
- parser.add_argument(
- "--num-spkrs",
- type=int,
- default=1,
- choices=[1, 2],
- help="Number of speakers in the speech",
- )
- parser.add_argument(
- "--num-encs", default=1, type=int, help="Number of encoders in the model."
- )
- # search related
- parser.add_argument("--nbest", type=int, default=1, help="Output N-best hypotheses")
- parser.add_argument("--beam-size", type=int, default=1, help="Beam size")
-    parser.add_argument("--penalty", type=float, default=0.0, help="Insertion penalty")
- parser.add_argument(
- "--maxlenratio",
- type=float,
- default=0.0,
- help="""Input length ratio to obtain max output length.
-        If maxlenratio=0.0 (default), it uses an end-detect function
- to automatically find maximum hypothesis lengths""",
- )
- parser.add_argument(
- "--minlenratio",
- type=float,
- default=0.0,
- help="Input length ratio to obtain min output length",
- )
- parser.add_argument(
- "--ctc-weight", type=float, default=0.0, help="CTC weight in joint decoding"
- )
- parser.add_argument(
- "--weights-ctc-dec",
- type=float,
- action="append",
- help="ctc weight assigned to each encoder during decoding."
- "[in multi-encoder mode only]",
- )
- parser.add_argument(
- "--ctc-window-margin",
- type=int,
- default=0,
- help="""Use CTC window with margin parameter to accelerate
-        CTC/attention decoding especially on GPU. Smaller margin
- makes decoding faster, but may increase search errors.
- If margin=0 (default), this function is disabled""",
- )
- # transducer related
- parser.add_argument(
- "--search-type",
- type=str,
- default="default",
- choices=["default", "nsc", "tsd", "alsd"],
- help="""Type of beam search implementation to use during inference.
- Can be either: default beam search, n-step constrained beam search ("nsc"),
- time-synchronous decoding ("tsd") or alignment-length synchronous decoding
- ("alsd").
- Additional associated parameters: "nstep" + "prefix-alpha" (for nsc),
- "max-sym-exp" (for tsd) and "u-max" (for alsd)""",
- )
- parser.add_argument(
- "--nstep",
- type=int,
- default=1,
- help="Number of expansion steps allowed in NSC beam search.",
- )
- parser.add_argument(
- "--prefix-alpha",
- type=int,
- default=2,
- help="Length prefix difference allowed in NSC beam search.",
- )
- parser.add_argument(
- "--max-sym-exp",
- type=int,
- default=2,
- help="Number of symbol expansions allowed in TSD decoding.",
- )
- parser.add_argument(
- "--u-max",
- type=int,
- default=400,
- help="Length prefix difference allowed in ALSD beam search.",
- )
- parser.add_argument(
- "--score-norm",
- type=strtobool,
- nargs="?",
- default=True,
- help="Normalize transducer scores by length",
- )
- # rnnlm related
- parser.add_argument(
- "--rnnlm", type=str, default=None, help="RNNLM model file to read"
- )
- parser.add_argument(
- "--rnnlm-conf", type=str, default=None, help="RNNLM model config file to read"
- )
- parser.add_argument(
- "--word-rnnlm", type=str, default=None, help="Word RNNLM model file to read"
- )
- parser.add_argument(
- "--word-rnnlm-conf",
- type=str,
- default=None,
- help="Word RNNLM model config file to read",
- )
- parser.add_argument("--word-dict", type=str, default=None, help="Word list to read")
- parser.add_argument("--lm-weight", type=float, default=0.1, help="RNNLM weight")
- # ngram related
- parser.add_argument(
- "--ngram-model", type=str, default=None, help="ngram model file to read"
- )
- parser.add_argument("--ngram-weight", type=float, default=0.1, help="ngram weight")
- parser.add_argument(
- "--ngram-scorer",
- type=str,
- default="part",
- choices=("full", "part"),
-        help="""if the ngram is set as a part scorer, similar to the CTC scorer,
-        the ngram scorer only scores the top-K hypotheses.
-        if the ngram is set as a full scorer, the ngram scorer scores all hypotheses.
-        the decoding speed of the part scorer is much faster than that of the full one""",
- )
- # streaming related
- parser.add_argument(
- "--streaming-mode",
- type=str,
- default=None,
- choices=["window", "segment"],
- help="""Use streaming recognizer for inference.
- `--batchsize` must be set to 0 to enable this mode""",
- )
- parser.add_argument("--streaming-window", type=int, default=10, help="Window size")
- parser.add_argument(
- "--streaming-min-blank-dur",
- type=int,
- default=10,
- help="Minimum blank duration threshold",
- )
- parser.add_argument(
- "--streaming-onset-margin", type=int, default=1, help="Onset margin"
- )
- parser.add_argument(
- "--streaming-offset-margin", type=int, default=1, help="Offset margin"
- )
- # non-autoregressive related
- # Mask CTC related. See https://arxiv.org/abs/2005.08700 for the detail.
- parser.add_argument(
- "--maskctc-n-iterations",
- type=int,
- default=10,
- help="Number of decoding iterations."
- "For Mask CTC, set 0 to predict 1 mask/iter.",
- )
- parser.add_argument(
- "--maskctc-probability-threshold",
- type=float,
- default=0.999,
- help="Threshold probability for CTC output",
- )
-
- return parser
-
-
-def main(args):
- """Run the main decoding function."""
- parser = get_parser()
- args = parser.parse_args(args)
-
- if args.ngpu == 0 and args.dtype == "float16":
- raise ValueError(f"--dtype {args.dtype} does not support the CPU backend.")
-
- # logging info
- if args.verbose == 1:
- logging.basicConfig(
- level=logging.INFO,
- format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s",
- )
- elif args.verbose == 2:
- logging.basicConfig(
- level=logging.DEBUG,
- format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s",
- )
- else:
- logging.basicConfig(
- level=logging.WARN,
- format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s",
- )
- logging.warning("Skip DEBUG/INFO messages")
-
- # check CUDA_VISIBLE_DEVICES
- if args.ngpu > 0:
- cvd = os.environ.get("CUDA_VISIBLE_DEVICES")
- if cvd is None:
- logging.warning("CUDA_VISIBLE_DEVICES is not set.")
- elif args.ngpu != len(cvd.split(",")):
- logging.error("#gpus is not matched with CUDA_VISIBLE_DEVICES.")
- sys.exit(1)
-
- # TODO(mn5k): support of multiple GPUs
- if args.ngpu > 1:
- logging.error("The program only supports ngpu=1.")
- sys.exit(1)
-
- # display PYTHONPATH
- logging.info("python path = " + os.environ.get("PYTHONPATH", "(None)"))
-
- # seed setting
- random.seed(args.seed)
- np.random.seed(args.seed)
- logging.info("set random seed = %d" % args.seed)
-
- # validate rnn options
- if args.rnnlm is not None and args.word_rnnlm is not None:
- logging.error(
- "It seems that both --rnnlm and --word-rnnlm are specified. "
- "Please use either option."
- )
- sys.exit(1)
-
- # recog
- logging.info("backend = " + args.backend)
- if args.num_spkrs == 1:
- if args.backend == "chainer":
- from espnet.asr.chainer_backend.asr import recog
-
- recog(args)
- elif args.backend == "pytorch":
- if args.num_encs == 1:
- # Experimental API that supports custom LMs
- if args.api == "v2":
- from espnet.asr.pytorch_backend.recog import recog_v2
-
- recog_v2(args)
- else:
- from espnet.asr.pytorch_backend.asr import recog
-
- if args.dtype != "float32":
- raise NotImplementedError(
- f"`--dtype {args.dtype}` is only available with `--api v2`"
- )
- recog(args)
- else:
- if args.api == "v2":
- raise NotImplementedError(
- f"--num-encs {args.num_encs} > 1 is not supported in --api v2"
- )
- else:
- from espnet.asr.pytorch_backend.asr import recog
-
- recog(args)
- else:
- raise ValueError("Only chainer and pytorch are supported.")
- elif args.num_spkrs == 2:
- if args.backend == "pytorch":
- from espnet.asr.pytorch_backend.asr_mix import recog
-
- recog(args)
- else:
- raise ValueError("Only pytorch is supported.")
-
-
-if __name__ == "__main__":
- main(sys.argv[1:])
diff --git a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/rnn/encoders.py b/spaces/segments-tobias/conex/espnet/nets/chainer_backend/rnn/encoders.py
deleted file mode 100644
index e534c144860688963c5106d0147348511f38cdb4..0000000000000000000000000000000000000000
--- a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/rnn/encoders.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import logging
-import six
-
-import chainer
-import chainer.functions as F
-import chainer.links as L
-import numpy as np
-
-from chainer import cuda
-
-from espnet.nets.chainer_backend.nets_utils import _subsamplex
-from espnet.nets.e2e_asr_common import get_vgg2l_odim
-
-
-# TODO(watanabe) explanation of BLSTMP
-class RNNP(chainer.Chain):
- """RNN with projection layer module.
-
- Args:
- idim (int): Dimension of inputs.
- elayers (int): Number of encoder layers.
-        cdim (int): Number of rnn units (resulting in cdim * 2 if bidirectional).
-        hdim (int): Number of projection units.
-        subsample (np.ndarray): List used to subsample the input array.
- dropout (float): Dropout rate.
- typ (str): The RNN type.
-
- """
-
- def __init__(self, idim, elayers, cdim, hdim, subsample, dropout, typ="blstm"):
- super(RNNP, self).__init__()
- bidir = typ[0] == "b"
- if bidir:
- rnn = L.NStepBiLSTM if "lstm" in typ else L.NStepBiGRU
- else:
- rnn = L.NStepLSTM if "lstm" in typ else L.NStepGRU
- rnn_label = "birnn" if bidir else "rnn"
- with self.init_scope():
- for i in six.moves.range(elayers):
- if i == 0:
- inputdim = idim
- else:
- inputdim = hdim
- _cdim = 2 * cdim if bidir else cdim
- # bottleneck layer to merge
- setattr(
- self, "{}{:d}".format(rnn_label, i), rnn(1, inputdim, cdim, dropout)
- )
- setattr(self, "bt%d" % i, L.Linear(_cdim, hdim))
-
- self.elayers = elayers
- self.rnn_label = rnn_label
- self.cdim = cdim
- self.subsample = subsample
- self.typ = typ
- self.bidir = bidir
-
- def __call__(self, xs, ilens):
- """RNNP forward.
-
- Args:
-            xs (chainer.Variable): Batch of padded character ids. (B, Tmax)
- ilens (chainer.Variable): Batch of length of each input batch. (B,)
-
- Returns:
-            xs (chainer.Variable): Subsampled vector of xs.
- chainer.Variable: Subsampled vector of ilens.
-
- """
- logging.info(self.__class__.__name__ + " input lengths: " + str(ilens))
-
- for layer in six.moves.range(self.elayers):
- if "lstm" in self.typ:
- _, _, ys = self[self.rnn_label + str(layer)](None, None, xs)
- else:
- _, ys = self[self.rnn_label + str(layer)](None, xs)
- # ys: utt list of frame x cdim x 2 (2: means bidirectional)
- # TODO(watanabe) replace subsample and FC layer with CNN
- ys, ilens = _subsamplex(ys, self.subsample[layer + 1])
- # (sum _utt frame_utt) x dim
- ys = self["bt" + str(layer)](F.vstack(ys))
- xs = F.split_axis(ys, np.cumsum(ilens[:-1]), axis=0)
-
- # final tanh operation
- xs = F.split_axis(F.tanh(F.vstack(xs)), np.cumsum(ilens[:-1]), axis=0)
-
- # 1 utterance case, it becomes an array, so need to make a utt tuple
- if not isinstance(xs, tuple):
- xs = [xs]
-
- return xs, ilens # x: utt list of frame x dim
-
-
-class RNN(chainer.Chain):
- """RNN Module.
-
- Args:
-        idim (int): Dimension of the input.
- elayers (int): Number of encoder layers.
- cdim (int): Number of rnn units.
- hdim (int): Number of projection units.
- dropout (float): Dropout rate.
- typ (str): Rnn type.
-
- """
-
- def __init__(self, idim, elayers, cdim, hdim, dropout, typ="lstm"):
- super(RNN, self).__init__()
- bidir = typ[0] == "b"
- if bidir:
- rnn = L.NStepBiLSTM if "lstm" in typ else L.NStepBiGRU
- else:
- rnn = L.NStepLSTM if "lstm" in typ else L.NStepGRU
- _cdim = 2 * cdim if bidir else cdim
- with self.init_scope():
- self.nbrnn = rnn(elayers, idim, cdim, dropout)
- self.l_last = L.Linear(_cdim, hdim)
- self.typ = typ
- self.bidir = bidir
-
- def __call__(self, xs, ilens):
- """BRNN forward propagation.
-
- Args:
-            xs (chainer.Variable): Batch of padded character ids. (B, Tmax)
- ilens (chainer.Variable): Batch of length of each input batch. (B,)
-
- Returns:
- tuple(chainer.Variable): Tuple of `chainer.Variable` objects.
- chainer.Variable: `ilens` .
-
- """
- logging.info(self.__class__.__name__ + " input lengths: " + str(ilens))
- # need to move ilens to cpu
- ilens = cuda.to_cpu(ilens)
-
- if "lstm" in self.typ:
- _, _, ys = self.nbrnn(None, None, xs)
- else:
- _, ys = self.nbrnn(None, xs)
- ys = self.l_last(F.vstack(ys)) # (sum _utt frame_utt) x dim
- xs = F.split_axis(ys, np.cumsum(ilens[:-1]), axis=0)
-
- # final tanh operation
- xs = F.split_axis(F.tanh(F.vstack(xs)), np.cumsum(ilens[:-1]), axis=0)
-
- # 1 utterance case, it becomes an array, so need to make a utt tuple
- if not isinstance(xs, tuple):
- xs = [xs]
-
- return xs, ilens # x: utt list of frame x dim
-
-
-# TODO(watanabe) explanation of VGG2L, VGG2B (Block) might be better
-class VGG2L(chainer.Chain):
-    """VGG-motivated CNN layers.
-
- Args:
- in_channel (int): Number of channels.
-
- """
-
- def __init__(self, in_channel=1):
- super(VGG2L, self).__init__()
- with self.init_scope():
- # CNN layer (VGG motivated)
- self.conv1_1 = L.Convolution2D(in_channel, 64, 3, stride=1, pad=1)
- self.conv1_2 = L.Convolution2D(64, 64, 3, stride=1, pad=1)
- self.conv2_1 = L.Convolution2D(64, 128, 3, stride=1, pad=1)
- self.conv2_2 = L.Convolution2D(128, 128, 3, stride=1, pad=1)
-
- self.in_channel = in_channel
-
- def __call__(self, xs, ilens):
- """VGG2L forward propagation.
-
- Args:
-            xs (chainer.Variable): Batch of padded character ids. (B, Tmax)
-            ilens (chainer.Variable): Batch of lengths of the features. (B,)
-
- Returns:
- chainer.Variable: Subsampled vector of xs.
- chainer.Variable: Subsampled vector of ilens.
-
- """
- logging.info(self.__class__.__name__ + " input lengths: " + str(ilens))
-
- # x: utt x frame x dim
- xs = F.pad_sequence(xs)
-
- # x: utt x 1 (input channel num) x frame x dim
- xs = F.swapaxes(
- xs.reshape(
- xs.shape[0],
- xs.shape[1],
- self.in_channel,
- xs.shape[2] // self.in_channel,
- ),
- 1,
- 2,
- )
-
- xs = F.relu(self.conv1_1(xs))
- xs = F.relu(self.conv1_2(xs))
- xs = F.max_pooling_2d(xs, 2, stride=2)
-
- xs = F.relu(self.conv2_1(xs))
- xs = F.relu(self.conv2_2(xs))
- xs = F.max_pooling_2d(xs, 2, stride=2)
-
- # change ilens accordingly
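-        # (each of the two 2x2 max-pooling layers above halves the number of frames, hence ceil(len / 2) twice)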
- ilens = self.xp.array(
- self.xp.ceil(self.xp.array(ilens, dtype=np.float32) / 2), dtype=np.int32
- )
- ilens = self.xp.array(
- self.xp.ceil(self.xp.array(ilens, dtype=np.float32) / 2), dtype=np.int32
- )
-
- # x: utt_list of frame (remove zeropaded frames) x (input channel num x dim)
- xs = F.swapaxes(xs, 1, 2)
- xs = xs.reshape(xs.shape[0], xs.shape[1], xs.shape[2] * xs.shape[3])
- xs = [xs[i, : ilens[i], :] for i in range(len(ilens))]
-
- return xs, ilens
-
-
-class Encoder(chainer.Chain):
- """Encoder network class.
-
- Args:
- etype (str): Type of encoder network.
- idim (int): Number of dimensions of encoder network.
- elayers (int): Number of layers of encoder network.
- eunits (int): Number of lstm units of encoder network.
- eprojs (int): Number of projection units of encoder network.
- subsample (np.array): Subsampling number. e.g. 1_2_2_2_1
- dropout (float): Dropout rate.
-
- """
-
- def __init__(
- self, etype, idim, elayers, eunits, eprojs, subsample, dropout, in_channel=1
- ):
- super(Encoder, self).__init__()
- typ = etype.lstrip("vgg").rstrip("p")
- if typ not in ["lstm", "gru", "blstm", "bgru"]:
- logging.error("Error: need to specify an appropriate encoder architecture")
- with self.init_scope():
- if etype.startswith("vgg"):
- if etype[-1] == "p":
- self.enc = chainer.Sequential(
- VGG2L(in_channel),
- RNNP(
- get_vgg2l_odim(idim, in_channel=in_channel),
- elayers,
- eunits,
- eprojs,
- subsample,
- dropout,
- typ=typ,
- ),
- )
- logging.info("Use CNN-VGG + " + typ.upper() + "P for encoder")
- else:
- self.enc = chainer.Sequential(
- VGG2L(in_channel),
- RNN(
- get_vgg2l_odim(idim, in_channel=in_channel),
- elayers,
- eunits,
- eprojs,
- dropout,
- typ=typ,
- ),
- )
- logging.info("Use CNN-VGG + " + typ.upper() + " for encoder")
- self.conv_subsampling_factor = 4
- else:
- if etype[-1] == "p":
- self.enc = chainer.Sequential(
- RNNP(idim, elayers, eunits, eprojs, subsample, dropout, typ=typ)
- )
- logging.info(
- typ.upper() + " with every-layer projection for encoder"
- )
- else:
- self.enc = chainer.Sequential(
- RNN(idim, elayers, eunits, eprojs, dropout, typ=typ)
- )
- logging.info(typ.upper() + " without projection for encoder")
- self.conv_subsampling_factor = 1
-
- def __call__(self, xs, ilens):
- """Encoder forward.
-
- Args:
-            xs (chainer.Variable): Batch of padded character ids. (B, Tmax)
-            ilens (chainer.Variable): Batch of lengths of the features. (B,)
-
- Returns:
- chainer.Variable: Output of the encoder.
- chainer.Variable: (Subsampled) vector of ilens.
-
- """
- xs, ilens = self.enc(xs, ilens)
-
- return xs, ilens
-
-
-def encoder_for(args, idim, subsample):
- """Return the Encoder module.
-
- Args:
- idim (int): Dimension of input array.
-        subsample (numpy.array): Subsample numbers, e.g. 1_2_2_2_1.
-
-    Returns:
- chainer.nn.Module: Encoder module.
-
- """
- return Encoder(
- args.etype,
- idim,
- args.elayers,
- args.eunits,
- args.eprojs,
- subsample,
- args.dropout_rate,
- )
diff --git a/spaces/sgxz/bingo/src/components/voice.tsx b/spaces/sgxz/bingo/src/components/voice.tsx
deleted file mode 100644
index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000
--- a/spaces/sgxz/bingo/src/components/voice.tsx
+++ /dev/null
@@ -1,52 +0,0 @@
-import React, { useEffect } from 'react'
-import { useSetAtom } from 'jotai'
-import { useBing } from '@/lib/hooks/use-bing'
-import Image from 'next/image'
-import VoiceIcon from '@/assets/images/voice.svg'
-import VoiceButton from './ui/voice'
-import { SR } from '@/lib/bots/bing/sr'
-import { voiceListenAtom } from '@/state'
-
-const sr = new SR(['发送', '清空', '退出'])
-
-const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick<ReturnType<typeof useBing>, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => {
- const setListen = useSetAtom(voiceListenAtom)
- useEffect(() => {
- if (sr.listening) return
- sr.transcript = !isSpeaking
- }, [isSpeaking])
-
- useEffect(() => {
- sr.onchange = (msg: string, command?: string) => {
- switch (command) {
- case '退出':
- sr.stop()
- break;
- case '发送':
- sendMessage(input)
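-          // note: falls through to '清空' so the input is cleared right after sending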
- case '清空':
- setInput('')
- break;
- default:
- setInput(input + msg)
- }
- }
- }, [input])
-
- const switchSR = (enable: boolean = false) => {
- setListen(enable)
- if (enable) {
- sr.start()
- } else {
- sr.stop()
- }
- }
-
- return sr.listening ? (
-    <VoiceButton onClick={() => switchSR(false)} />
- ) : (
-    <Image alt="voice" src={VoiceIcon} width={24} onClick={() => switchSR(true)} />
- )
-};
-
-export default Voice;
diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/docs/training_tips_ja.md b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/docs/training_tips_ja.md
deleted file mode 100644
index c5b06f2fdaa603a690c51ee2b79daecc4305fbd5..0000000000000000000000000000000000000000
--- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/docs/training_tips_ja.md
+++ /dev/null
@@ -1,64 +0,0 @@
-Explanation of RVC training, and tips
-===============================
-These tips explain how training on your data is carried out.
-
-# Training flow
-The explanation follows the steps in the training tab of the GUI.
-
-## step1
-Set the experiment name here.
-
-You can also set whether the model should take the pitch guide (pitch) into account. If it does not, the model becomes lighter, but it is no longer well suited for singing.
-
-The data of each experiment is placed in `/logs/experiment-name/`.
-
-## step2a
-Loads and preprocesses the audio.
-
-### load audio
-When you specify a folder that contains audio, the audio files in that folder are read automatically.
-For example, if you specify `C:Users\hoge\voices`, then `C:Users\hoge\voices\voice.mp3` is loaded, but `C:Users\hoge\voices\dir\voice.mp3` is not.
-
-Since ffmpeg is used internally to read the audio, any extension supported by ffmpeg is loaded automatically.
-After conversion to int16 by ffmpeg, the audio is converted to float32 and normalized to the range -1 to 1.
-
-### denoising
-The audio is smoothed with scipy's filtfilt.
-
-### Splitting the audio
-The input audio is first split by detecting silent parts that last longer than a certain period (max_sil_kept = 5 seconds?). After splitting on silence, the audio is split every 4 seconds with an overlap of 0.3 seconds. For the segments of up to 4 seconds, the volume is normalized and the wav files are saved to `/logs/experiment-name/0_gt_wavs`; they are then converted to a 16k sampling rate and saved as wav files to `/logs/experiment-name/1_16k_wavs`.
-
-## step2b
-### Pitch extraction
-Extracts pitch information from the wav files. The pitch information (= f0) is extracted with the methods built into parselmouth or pyworld and saved to `/logs/experiment-name/2a_f0`. The pitch information is then converted to the log scale, mapped to an integer between 1 and 255, and saved to `/logs/experiment-name/2b-f0nsf`.
-
-### feature_print extraction
-The wav files are converted into embeddings in advance using HuBERT. The wav files saved in `/logs/experiment-name/1_16k_wavs` are loaded, converted by HuBERT into 256-dimensional features, and saved in npy format to `/logs/experiment-name/3_feature256`.
-
-## step3
-Trains the model.
-### Glossary for beginners
-In deep learning, the dataset is split and training proceeds little by little. In one model update (step), batch_size samples are taken out and a prediction and error correction are carried out. Doing this once over the whole dataset counts as one epoch.
-
-Therefore, the training time is: time per step x (number of samples in the dataset / batch size) x number of epochs. In general, a larger batch size makes training more stable and reduces (time per step / batch size), but it uses more GPU memory. GPU RAM can be checked with the nvidia-smi command or similar. Making the batch size as large as the machine of your environment allows will finish training in a shorter time.
-
-### Specifying the pretrained model
-RVC starts model training not from scratch but from pretrained weights, so it can be trained on a small dataset.
-
-By default,
-
-- when the pitch guide is taken into account, it loads `<RVC location>/pretrained/f0G40k.pth` and `<RVC location>/pretrained/f0D40k.pth`.
-- when the pitch guide is not taken into account, it loads `<RVC location>/pretrained/G40k.pth` and `<RVC location>/pretrained/D40k.pth`.
-
-During training, the model parameters are saved every save_every_epoch epochs to `logs/experiment-name/G_{}.pth` and `logs/experiment-name/D_{}.pth`. By specifying these paths, you can resume training or start training from the weights of a model trained in a different experiment.
-
-### Training the index
-RVC saves the HuBERT features used during training, and at inference time it searches for features that are close to the training features and uses them for inference. The index is trained in advance so that this search can be done quickly.
-For the index training, faiss, an approximate nearest-neighbor search library, is used. The features in `/logs/experiment-name/3_feature256` are loaded, and the index trained on them is saved as `/logs/experiment-name/add_XXX.index` (a rough code sketch of this step is given at the end of this document).
-(Since the 20230428 update, total_fea.npy is read from the index, so it is no longer needed.)
-
-### Button descriptions
-- Train model: after running everything up to step2b, press this button to train the model.
-- Train feature index: after training the model, performs the index training.
-- One-click training: runs everything up to step2b, model training, and feature index training all at once.
-
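-As a rough illustration of the "Training the index" step described above, a minimal sketch of building such an index with faiss could look like this (the paths and the exact index type are illustrative assumptions; RVC's real implementation may differ):
-
-```python
-import glob
-
-import faiss
-import numpy as np
-
-# load the 256-dim HuBERT features extracted in step2b (illustrative path)
-feats = np.concatenate(
-    [np.load(p) for p in sorted(glob.glob("logs/experiment-name/3_feature256/*.npy"))]
-).astype("float32")
-
-# train a small IVF index on the features and write it next to the experiment logs
-quantizer = faiss.IndexFlatL2(feats.shape[1])
-index = faiss.IndexIVFFlat(quantizer, feats.shape[1], 256)
-index.train(feats)
-index.add(feats)
-faiss.write_index(index, "logs/experiment-name/added.index")
-```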
diff --git a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/datasets/transforms.py b/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/datasets/transforms.py
deleted file mode 100644
index 91cf9269e4b31008a3ddca34a19b038a9b399991..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/datasets/transforms.py
+++ /dev/null
@@ -1,311 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Transforms and data augmentation for both image + bbox.
-"""
-import os
-import random
-
-import PIL
-import torch
-import torchvision.transforms as T
-import torchvision.transforms.functional as F
-
-from groundingdino.util.box_ops import box_xyxy_to_cxcywh
-from groundingdino.util.misc import interpolate
-
-
-def crop(image, target, region):
- cropped_image = F.crop(image, *region)
-
- target = target.copy()
- i, j, h, w = region
-
- # should we do something wrt the original size?
- target["size"] = torch.tensor([h, w])
-
- fields = ["labels", "area", "iscrowd", "positive_map"]
-
- if "boxes" in target:
- boxes = target["boxes"]
- max_size = torch.as_tensor([w, h], dtype=torch.float32)
- cropped_boxes = boxes - torch.as_tensor([j, i, j, i])
- cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size)
- cropped_boxes = cropped_boxes.clamp(min=0)
- area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1)
- target["boxes"] = cropped_boxes.reshape(-1, 4)
- target["area"] = area
- fields.append("boxes")
-
- if "masks" in target:
- # FIXME should we update the area here if there are no boxes?
- target["masks"] = target["masks"][:, i : i + h, j : j + w]
- fields.append("masks")
-
- # remove elements for which the boxes or masks that have zero area
- if "boxes" in target or "masks" in target:
- # favor boxes selection when defining which elements to keep
- # this is compatible with previous implementation
- if "boxes" in target:
- cropped_boxes = target["boxes"].reshape(-1, 2, 2)
- keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1)
- else:
- keep = target["masks"].flatten(1).any(1)
-
- for field in fields:
- if field in target:
- target[field] = target[field][keep]
-
- if os.environ.get("IPDB_SHILONG_DEBUG", None) == "INFO":
- # for debug and visualization only.
- if "strings_positive" in target:
- target["strings_positive"] = [
- _i for _i, _j in zip(target["strings_positive"], keep) if _j
- ]
-
- return cropped_image, target
-
-
-def hflip(image, target):
- flipped_image = F.hflip(image)
-
- w, h = image.size
-
- target = target.copy()
- if "boxes" in target:
- boxes = target["boxes"]
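-        # flip x-coordinates: (x1, y1, x2, y2) -> (w - x2, y1, w - x1, y2)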
- boxes = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor(
- [w, 0, w, 0]
- )
- target["boxes"] = boxes
-
- if "masks" in target:
- target["masks"] = target["masks"].flip(-1)
-
- return flipped_image, target
-
-
-def resize(image, target, size, max_size=None):
- # size can be min_size (scalar) or (w, h) tuple
-
- def get_size_with_aspect_ratio(image_size, size, max_size=None):
- w, h = image_size
- if max_size is not None:
- min_original_size = float(min((w, h)))
- max_original_size = float(max((w, h)))
- if max_original_size / min_original_size * size > max_size:
- size = int(round(max_size * min_original_size / max_original_size))
-
- if (w <= h and w == size) or (h <= w and h == size):
- return (h, w)
-
- if w < h:
- ow = size
- oh = int(size * h / w)
- else:
- oh = size
- ow = int(size * w / h)
-
- return (oh, ow)
-
- def get_size(image_size, size, max_size=None):
- if isinstance(size, (list, tuple)):
- return size[::-1]
- else:
- return get_size_with_aspect_ratio(image_size, size, max_size)
-
- size = get_size(image.size, size, max_size)
- rescaled_image = F.resize(image, size)
-
- if target is None:
- return rescaled_image, None
-
- ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size))
- ratio_width, ratio_height = ratios
-
- target = target.copy()
- if "boxes" in target:
- boxes = target["boxes"]
- scaled_boxes = boxes * torch.as_tensor(
- [ratio_width, ratio_height, ratio_width, ratio_height]
- )
- target["boxes"] = scaled_boxes
-
- if "area" in target:
- area = target["area"]
- scaled_area = area * (ratio_width * ratio_height)
- target["area"] = scaled_area
-
- h, w = size
- target["size"] = torch.tensor([h, w])
-
- if "masks" in target:
- target["masks"] = (
- interpolate(target["masks"][:, None].float(), size, mode="nearest")[:, 0] > 0.5
- )
-
- return rescaled_image, target
-
-
-def pad(image, target, padding):
- # assumes that we only pad on the bottom right corners
- padded_image = F.pad(image, (0, 0, padding[0], padding[1]))
- if target is None:
- return padded_image, None
- target = target.copy()
- # should we do something wrt the original size?
- target["size"] = torch.tensor(padded_image.size[::-1])
- if "masks" in target:
- target["masks"] = torch.nn.functional.pad(target["masks"], (0, padding[0], 0, padding[1]))
- return padded_image, target
-
-
-class ResizeDebug(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, img, target):
- return resize(img, target, self.size)
-
-
-class RandomCrop(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, img, target):
- region = T.RandomCrop.get_params(img, self.size)
- return crop(img, target, region)
-
-
-class RandomSizeCrop(object):
- def __init__(self, min_size: int, max_size: int, respect_boxes: bool = False):
- # respect_boxes: True to keep all boxes
-        #                   False to tolerate boxes being filtered out
- self.min_size = min_size
- self.max_size = max_size
- self.respect_boxes = respect_boxes
-
- def __call__(self, img: PIL.Image.Image, target: dict):
- init_boxes = len(target["boxes"])
- max_patience = 10
- for i in range(max_patience):
- w = random.randint(self.min_size, min(img.width, self.max_size))
- h = random.randint(self.min_size, min(img.height, self.max_size))
- region = T.RandomCrop.get_params(img, [h, w])
- result_img, result_target = crop(img, target, region)
- if (
- not self.respect_boxes
- or len(result_target["boxes"]) == init_boxes
- or i == max_patience - 1
- ):
- return result_img, result_target
- return result_img, result_target
-
-
-class CenterCrop(object):
- def __init__(self, size):
- self.size = size
-
- def __call__(self, img, target):
- image_width, image_height = img.size
- crop_height, crop_width = self.size
- crop_top = int(round((image_height - crop_height) / 2.0))
- crop_left = int(round((image_width - crop_width) / 2.0))
- return crop(img, target, (crop_top, crop_left, crop_height, crop_width))
-
-
-class RandomHorizontalFlip(object):
- def __init__(self, p=0.5):
- self.p = p
-
- def __call__(self, img, target):
- if random.random() < self.p:
- return hflip(img, target)
- return img, target
-
-
-class RandomResize(object):
- def __init__(self, sizes, max_size=None):
- assert isinstance(sizes, (list, tuple))
- self.sizes = sizes
- self.max_size = max_size
-
- def __call__(self, img, target=None):
- size = random.choice(self.sizes)
- return resize(img, target, size, self.max_size)
-
-
-class RandomPad(object):
- def __init__(self, max_pad):
- self.max_pad = max_pad
-
- def __call__(self, img, target):
- pad_x = random.randint(0, self.max_pad)
- pad_y = random.randint(0, self.max_pad)
- return pad(img, target, (pad_x, pad_y))
-
-
-class RandomSelect(object):
- """
- Randomly selects between transforms1 and transforms2,
- with probability p for transforms1 and (1 - p) for transforms2
- """
-
- def __init__(self, transforms1, transforms2, p=0.5):
- self.transforms1 = transforms1
- self.transforms2 = transforms2
- self.p = p
-
- def __call__(self, img, target):
- if random.random() < self.p:
- return self.transforms1(img, target)
- return self.transforms2(img, target)
-
-
-class ToTensor(object):
- def __call__(self, img, target):
- return F.to_tensor(img), target
-
-
-class RandomErasing(object):
- def __init__(self, *args, **kwargs):
- self.eraser = T.RandomErasing(*args, **kwargs)
-
- def __call__(self, img, target):
- return self.eraser(img), target
-
-
-class Normalize(object):
- def __init__(self, mean, std):
- self.mean = mean
- self.std = std
-
- def __call__(self, image, target=None):
- image = F.normalize(image, mean=self.mean, std=self.std)
- if target is None:
- return image, None
- target = target.copy()
- h, w = image.shape[-2:]
- if "boxes" in target:
- boxes = target["boxes"]
- boxes = box_xyxy_to_cxcywh(boxes)
- boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32)
- target["boxes"] = boxes
- return image, target
-
-
-class Compose(object):
- def __init__(self, transforms):
- self.transforms = transforms
-
- def __call__(self, image, target):
- for t in self.transforms:
- image, target = t(image, target)
- return image, target
-
- def __repr__(self):
- format_string = self.__class__.__name__ + "("
- for t in self.transforms:
- format_string += "\n"
- format_string += " {0}".format(t)
- format_string += "\n)"
- return format_string
diff --git a/spaces/shi-labs/Matting-Anything/app.py b/spaces/shi-labs/Matting-Anything/app.py
deleted file mode 100644
index e0abd00909e5860193e913e05e07d7e9c0a3f248..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/Matting-Anything/app.py
+++ /dev/null
@@ -1,307 +0,0 @@
-# ------------------------------------------------------------------------
-# Modified from Grounded-SAM (https://github.com/IDEA-Research/Grounded-Segment-Anything)
-# ------------------------------------------------------------------------
-import os
-import sys
-import random
-import warnings
-
-os.system("export BUILD_WITH_CUDA=True")
-os.system("python -m pip install -e segment-anything")
-os.system("python -m pip install -e GroundingDINO")
-os.system("pip install --upgrade diffusers[torch]")
-#os.system("pip install opencv-python pycocotools matplotlib")
-sys.path.insert(0, './GroundingDINO')
-sys.path.insert(0, './segment-anything')
-warnings.filterwarnings("ignore")
-
-import cv2
-from scipy import ndimage
-
-import gradio as gr
-import argparse
-
-import numpy as np
-import torch
-from torch.nn import functional as F
-import torchvision
-import networks
-import utils
-
-# Grounding DINO
-from groundingdino.util.inference import Model
-
-# SAM
-from segment_anything.utils.transforms import ResizeLongestSide
-
-# SD
-from diffusers import StableDiffusionPipeline
-
-transform = ResizeLongestSide(1024)
-# Green Screen
-PALETTE_back = (51, 255, 146)
-
-GROUNDING_DINO_CONFIG_PATH = "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py"
-GROUNDING_DINO_CHECKPOINT_PATH = "checkpoints/groundingdino_swint_ogc.pth"
-mam_checkpoint="checkpoints/mam_sam_vitb.pth"
-output_dir="outputs"
-device = 'cuda'
-background_list = os.listdir('assets/backgrounds')
-
-# initialize MAM
-mam_model = networks.get_generator_m2m(seg='sam', m2m='sam_decoder_deep')
-mam_model.to(device)
-checkpoint = torch.load(mam_checkpoint, map_location=device)
-mam_model.load_state_dict(utils.remove_prefix_state_dict(checkpoint['state_dict']), strict=True)
-mam_model = mam_model.eval()
-
-# initialize GroundingDINO
-grounding_dino_model = Model(model_config_path=GROUNDING_DINO_CONFIG_PATH, model_checkpoint_path=GROUNDING_DINO_CHECKPOINT_PATH, device=device)
-
-# initialize StableDiffusionPipeline
-generator = StableDiffusionPipeline.from_pretrained("checkpoints/stable-diffusion-v1-5", torch_dtype=torch.float16)
-generator.to(device)
-
-def run_grounded_sam(input_image, text_prompt, task_type, background_prompt, background_type, box_threshold, text_threshold, iou_threshold, scribble_mode, guidance_mode):
-
- # make dir
- os.makedirs(output_dir, exist_ok=True)
-
- # load image
- image_ori = input_image["image"]
- scribble = input_image["mask"]
- original_size = image_ori.shape[:2]
-
- if task_type == 'text':
- if text_prompt is None:
- print('Please input non-empty text prompt')
- with torch.no_grad():
- detections, phrases = grounding_dino_model.predict_with_caption(
- image=cv2.cvtColor(image_ori, cv2.COLOR_RGB2BGR),
- caption=text_prompt,
- box_threshold=box_threshold,
- text_threshold=text_threshold
- )
-
- if len(detections.xyxy) > 1:
- nms_idx = torchvision.ops.nms(
- torch.from_numpy(detections.xyxy),
- torch.from_numpy(detections.confidence),
- iou_threshold,
- ).numpy().tolist()
-
- detections.xyxy = detections.xyxy[nms_idx]
- detections.confidence = detections.confidence[nms_idx]
-
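-        # keep only the highest-confidence detection as the bounding-box prompt for matting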
- bbox = detections.xyxy[np.argmax(detections.confidence)]
- bbox = transform.apply_boxes(bbox, original_size)
- bbox = torch.as_tensor(bbox, dtype=torch.float).to(device)
-
- image = transform.apply_image(image_ori)
- image = torch.as_tensor(image).to(device)
- image = image.permute(2, 0, 1).contiguous()
-
- pixel_mean = torch.tensor([123.675, 116.28, 103.53]).view(3,1,1).to(device)
- pixel_std = torch.tensor([58.395, 57.12, 57.375]).view(3,1,1).to(device)
-
- image = (image - pixel_mean) / pixel_std
-
- h, w = image.shape[-2:]
- pad_size = image.shape[-2:]
- padh = 1024 - h
- padw = 1024 - w
- image = F.pad(image, (0, padw, 0, padh))
-
- if task_type == 'scribble_point':
- scribble = scribble.transpose(2, 1, 0)[0]
- labeled_array, num_features = ndimage.label(scribble >= 255)
- centers = ndimage.center_of_mass(scribble, labeled_array, range(1, num_features+1))
- centers = np.array(centers)
- ### (x,y)
- centers = transform.apply_coords(centers, original_size)
- point_coords = torch.from_numpy(centers).to(device)
- point_coords = point_coords.unsqueeze(0).to(device)
- point_labels = torch.from_numpy(np.array([1] * len(centers))).unsqueeze(0).to(device)
- if scribble_mode == 'split':
- point_coords = point_coords.permute(1, 0, 2)
- point_labels = point_labels.permute(1, 0)
-
- sample = {'image': image.unsqueeze(0), 'point': point_coords, 'label': point_labels, 'ori_shape': original_size, 'pad_shape': pad_size}
- elif task_type == 'scribble_box':
- scribble = scribble.transpose(2, 1, 0)[0]
- labeled_array, num_features = ndimage.label(scribble >= 255)
- centers = ndimage.center_of_mass(scribble, labeled_array, range(1, num_features+1))
- centers = np.array(centers)
- ### (x1, y1, x2, y2)
- x_min = centers[:, 0].min()
- x_max = centers[:, 0].max()
- y_min = centers[:, 1].min()
- y_max = centers[:, 1].max()
- bbox = np.array([x_min, y_min, x_max, y_max])
- bbox = transform.apply_boxes(bbox, original_size)
- bbox = torch.as_tensor(bbox, dtype=torch.float).to(device)
-
- sample = {'image': image.unsqueeze(0), 'bbox': bbox.unsqueeze(0), 'ori_shape': original_size, 'pad_shape': pad_size}
- elif task_type == 'text':
- sample = {'image': image.unsqueeze(0), 'bbox': bbox.unsqueeze(0), 'ori_shape': original_size, 'pad_shape': pad_size}
- else:
- print("task_type:{} error!".format(task_type))
-
- with torch.no_grad():
- feas, pred, post_mask = mam_model.forward_inference(sample)
-
- alpha_pred_os1, alpha_pred_os4, alpha_pred_os8 = pred['alpha_os1'], pred['alpha_os4'], pred['alpha_os8']
- alpha_pred_os8 = alpha_pred_os8[..., : sample['pad_shape'][0], : sample['pad_shape'][1]]
- alpha_pred_os4 = alpha_pred_os4[..., : sample['pad_shape'][0], : sample['pad_shape'][1]]
- alpha_pred_os1 = alpha_pred_os1[..., : sample['pad_shape'][0], : sample['pad_shape'][1]]
-
- alpha_pred_os8 = F.interpolate(alpha_pred_os8, sample['ori_shape'], mode="bilinear", align_corners=False)
- alpha_pred_os4 = F.interpolate(alpha_pred_os4, sample['ori_shape'], mode="bilinear", align_corners=False)
- alpha_pred_os1 = F.interpolate(alpha_pred_os1, sample['ori_shape'], mode="bilinear", align_corners=False)
-
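-        # fuse the coarse-to-fine alpha predictions (output stride 8 -> 4 -> 1); guidance_mode decides whether the SAM mask or the OS8 alpha serves as the base estimate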
- if guidance_mode == 'mask':
- weight_os8 = utils.get_unknown_tensor_from_mask_oneside(post_mask, rand_width=10, train_mode=False)
- post_mask[weight_os8>0] = alpha_pred_os8[weight_os8>0]
- alpha_pred = post_mask.clone().detach()
- else:
- weight_os8 = utils.get_unknown_box_from_mask(post_mask)
- alpha_pred_os8[weight_os8>0] = post_mask[weight_os8>0]
- alpha_pred = alpha_pred_os8.clone().detach()
-
-
- weight_os4 = utils.get_unknown_tensor_from_pred_oneside(alpha_pred, rand_width=20, train_mode=False)
- alpha_pred[weight_os4>0] = alpha_pred_os4[weight_os4>0]
-
- weight_os1 = utils.get_unknown_tensor_from_pred_oneside(alpha_pred, rand_width=10, train_mode=False)
- alpha_pred[weight_os1>0] = alpha_pred_os1[weight_os1>0]
-
- alpha_pred = alpha_pred[0][0].cpu().numpy()
-
- #### draw
- ### alpha matte
- alpha_rgb = cv2.cvtColor(np.uint8(alpha_pred*255), cv2.COLOR_GRAY2RGB)
- ### com img with background
- if background_type == 'real_world_sample':
- background_img_file = os.path.join('assets/backgrounds', random.choice(background_list))
- background_img = cv2.imread(background_img_file)
- background_img = cv2.cvtColor(background_img, cv2.COLOR_BGR2RGB)
- background_img = cv2.resize(background_img, (image_ori.shape[1], image_ori.shape[0]))
- com_img = alpha_pred[..., None] * image_ori + (1 - alpha_pred[..., None]) * np.uint8(background_img)
- com_img = np.uint8(com_img)
- else:
- if background_prompt is None:
- print('Please input non-empty background prompt')
- else:
- background_img = generator(background_prompt).images[0]
- background_img = np.array(background_img)
- background_img = cv2.resize(background_img, (image_ori.shape[1], image_ori.shape[0]))
- com_img = alpha_pred[..., None] * image_ori + (1 - alpha_pred[..., None]) * np.uint8(background_img)
- com_img = np.uint8(com_img)
- ### com img with green screen
- green_img = alpha_pred[..., None] * image_ori + (1 - alpha_pred[..., None]) * np.array([PALETTE_back], dtype='uint8')
- green_img = np.uint8(green_img)
- return [(com_img, 'composite with background'), (green_img, 'green screen'), (alpha_rgb, 'alpha matte')]
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser("MAM demo", add_help=True)
- parser.add_argument("--debug", action="store_true", help="using debug mode")
- parser.add_argument("--share", action="store_true", help="share the app")
- parser.add_argument('--port', type=int, default=7589, help='port to run the server')
- parser.add_argument('--no-gradio-queue', action="store_true", help='path to the SAM checkpoint')
- args = parser.parse_args()
-
- print(args)
-
- block = gr.Blocks()
- if not args.no_gradio_queue:
- block = block.queue()
-
- with block:
- gr.Markdown(
- """
- # Matting Anything
-
- [Jiachen Li](https://chrisjuniorli.github.io/),
- [Jitesh Jain](https://praeclarumjj3.github.io/),
- [Humphrey Shi](https://www.humphreyshi.com/home)
-
- [[`Project page`](https://chrisjuniorli.github.io/project/Matting-Anything/)]
- [[`ArXiv`](https://arxiv.org/abs/2306.05399)]
- [[`Code`](https://github.com/SHI-Labs/Matting-Anything)]
- [[`Video`](https://www.youtube.com/watch?v=XY2Q0HATGOk)]
-
-    Welcome to the Matting Anything demo. Upload your image to get started.
-    You may select different prompt types to get the alpha matte of the target instance, and select different backgrounds for image composition. The local setup instructions for the demo are available at: https://github.com/SHI-Labs/Matting-Anything
-
- ## Usage
- You may check the video to see how to play with the demo, or check the details below.
-
-    You may upload an image to start. We support 3 prompt types to get the alpha matte of the target instance:
-
-    **scribble_point**: Click a point on the target instance.
-
- **scribble_box**: Click on two points, the top-left point and the bottom-right point to represent a bounding box of the target instance.
-
- **text**: Send text prompt to identify the target instance in the `Text prompt` box.
-
- We also support 2 background types to support image composition with the alpha matte output:
-
- **real_world_sample**: Randomly select a real-world image from `assets/backgrounds` for composition.
-
- **generated_by_text**: Send background text prompt to create a background image with stable diffusion model in the `Background prompt` box.
-
-    **guidance_mode**: Try mask guidance if alpha guidance does not return satisfactory outputs.
-
-
- """)
-
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy", value="assets/demo.jpg", tool="sketch")
- task_type = gr.Dropdown(["scribble_point", "scribble_box", "text"], value="text", label="Prompt type")
- text_prompt = gr.Textbox(label="Text prompt", placeholder="the girl in the middle")
- background_type = gr.Dropdown(["generated_by_text", "real_world_sample"], value="generated_by_text", label="Background type")
- background_prompt = gr.Textbox(label="Background prompt", placeholder="downtown area in New York")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- box_threshold = gr.Slider(
- label="Box Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.05
- )
- text_threshold = gr.Slider(
- label="Text Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.05
- )
- iou_threshold = gr.Slider(
- label="IOU Threshold", minimum=0.0, maximum=1.0, value=0.5, step=0.05
- )
- scribble_mode = gr.Dropdown(
- ["merge", "split"], value="split", label="scribble_mode"
- )
- guidance_mode = gr.Dropdown(
-                        ["mask", "alpha"], value="alpha", label="guidance_mode", info="mask guidance works better on complex scenes with multiple instances, alpha guidance works better on simple scenes with human instances"
- )
-
- with gr.Column():
- gallery = gr.Gallery(
- label="Generated images", show_label=True, elem_id="gallery"
- ).style(preview=True, grid=3, object_fit="scale-down")
-
- run_button.click(fn=run_grounded_sam, inputs=[
- input_image, text_prompt, task_type, background_prompt, background_type, box_threshold, text_threshold, iou_threshold, scribble_mode, guidance_mode], outputs=gallery)
-
- gr.Markdown(
- """
- ## Examples
-
- | input image | text prompt | background type | background prompt | guidance mode | composite with background | green screen |
- |-------------|------------|--------------|--------------|--------------|--------------|--------------|
- | | the girl in the middle | real-world sample | None | alpha | | |
- | | the girl in the middle | generated by text | downtown area in chicago | alpha | | |
- | | the dog sitting the left side | generated by text | national park view | mask | | |
- | | the bigger dog sitting the right side | real-world sample | None | mask | | |
- | | the girl with red sweater | real-world sample | None | alpha | | |
- | | the girl with black sweater | generated by text | sunrise on the sea | alpha | | |
- """)
-
- block.launch(debug=args.debug, share=args.share, show_error=True)
- #block.queue(concurrency_count=100)
- #block.launch(server_name='0.0.0.0', server_port=args.port, debug=args.debug, share=args.share)
diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/utils/__init__.py b/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/utils/__init__.py
deleted file mode 100644
index 5fcc1d540462712387523d1e326d1dfc2bcfbf32..0000000000000000000000000000000000000000
--- a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/utils/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from .file_client import FileClient
-from .img_util import crop_border, imfrombytes, img2tensor, imwrite, tensor2img
-from .logger import MessageLogger, get_env_info, get_root_logger, init_tb_logger, init_wandb_logger
-from .misc import check_resume, get_time_str, make_exp_dirs, mkdir_and_rename, scandir, set_random_seed, sizeof_fmt
-
-__all__ = [
- # file_client.py
- 'FileClient',
- # img_util.py
- 'img2tensor',
- 'tensor2img',
- 'imfrombytes',
- 'imwrite',
- 'crop_border',
- # logger.py
- 'MessageLogger',
- 'init_tb_logger',
- 'init_wandb_logger',
- 'get_root_logger',
- 'get_env_info',
- # misc.py
- 'set_random_seed',
- 'get_time_str',
- 'mkdir_and_rename',
- 'make_exp_dirs',
- 'scandir',
- 'check_resume',
- 'sizeof_fmt'
-]
diff --git a/spaces/shvuuuu/Credit_Card_Churn_Predictor/README.md b/spaces/shvuuuu/Credit_Card_Churn_Predictor/README.md
deleted file mode 100644
index 0ecebd6dc83a5711307bfe6a4669002fbde77881..0000000000000000000000000000000000000000
--- a/spaces/shvuuuu/Credit_Card_Churn_Predictor/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Credit Card Churn Predictor
-emoji: 💻
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/silentchen/layout-guidance/app.py b/spaces/silentchen/layout-guidance/app.py
deleted file mode 100644
index ff3ce799176ff273291dbb3595c3384551ba1845..0000000000000000000000000000000000000000
--- a/spaces/silentchen/layout-guidance/app.py
+++ /dev/null
@@ -1,553 +0,0 @@
-import gradio as gr
-import torch
-from transformers import CLIPTextModel, CLIPTokenizer
-from diffusers import AutoencoderKL, LMSDiscreteScheduler
-from my_model import unet_2d_condition
-import json
-import numpy as np
-from PIL import Image, ImageDraw, ImageFont
-from functools import partial
-import math
-from utils import compute_ca_loss
-from gradio import processing_utils
-from typing import Optional
-
-import warnings
-
-import sys
-
-sys.tracebacklimit = 0
-
-class Blocks(gr.Blocks):
-
- def __init__(
- self,
- theme: str = "default",
- analytics_enabled: Optional[bool] = None,
- mode: str = "blocks",
- title: str = "Gradio",
- css: Optional[str] = None,
- **kwargs,
- ):
- self.extra_configs = {
- 'thumbnail': kwargs.pop('thumbnail', ''),
- 'url': kwargs.pop('url', 'https://gradio.app/'),
- 'creator': kwargs.pop('creator', '@teamGradio'),
- }
-
- super(Blocks, self).__init__(theme, analytics_enabled, mode, title, css, **kwargs)
- warnings.filterwarnings("ignore")
-
- def get_config_file(self):
- config = super(Blocks, self).get_config_file()
-
- for k, v in self.extra_configs.items():
- config[k] = v
-
- return config
-
-
-def draw_box(boxes=[], texts=[], img=None):
- if len(boxes) == 0 and img is None:
- return None
-
- if img is None:
- img = Image.new('RGB', (512, 512), (255, 255, 255))
- colors = ["red", "olive", "blue", "green", "orange", "brown", "cyan", "purple"]
- draw = ImageDraw.Draw(img)
- font = ImageFont.truetype("DejaVuSansMono.ttf", size=18)
- print(boxes)
- for bid, box in enumerate(boxes):
- draw.rectangle([box[0], box[1], box[2], box[3]], outline=colors[bid % len(colors)], width=4)
- anno_text = texts[bid]
- draw.rectangle(
- [box[0], box[3] - int(font.size * 1.2), box[0] + int((len(anno_text) + 0.8) * font.size * 0.6), box[3]],
- outline=colors[bid % len(colors)], fill=colors[bid % len(colors)], width=4)
- draw.text([box[0] + int(font.size * 0.2), box[3] - int(font.size * 1.2)], anno_text, font=font,
- fill=(255, 255, 255))
- return img
-
-'''
-inference model
-'''
-
-def inference(device, unet, vae, tokenizer, text_encoder, prompt, bboxes, object_positions, batch_size, loss_scale, loss_threshold, max_iter, max_index_step, rand_seed, guidance_scale):
- uncond_input = tokenizer(
- [""] * 1, padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt"
- )
- uncond_embeddings = text_encoder(uncond_input.input_ids.to(device))[0]
-
- input_ids = tokenizer(
- prompt,
- padding="max_length",
- truncation=True,
- max_length=tokenizer.model_max_length,
- return_tensors="pt",
- ).input_ids[0].unsqueeze(0).to(device)
- # text_embeddings = text_encoder(input_ids)[0]
- text_embeddings = torch.cat([uncond_embeddings, text_encoder(input_ids)[0]])
- # text_embeddings[1, 1, :] = text_embeddings[1, 2, :]
-    generator = torch.manual_seed(rand_seed) # Seed generator to create the initial latent noise
-
- latents = torch.randn(
- (batch_size, 4, 64, 64),
- generator=generator,
- ).to(device)
-
- noise_scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
-
- # generator = torch.Generator("cuda").manual_seed(1024)
- noise_scheduler.set_timesteps(51)
-
- latents = latents * noise_scheduler.init_noise_sigma
-
- loss = torch.tensor(10000)
-
- for index, t in enumerate(noise_scheduler.timesteps):
- iteration = 0
-
- while loss.item() / loss_scale > loss_threshold and iteration < max_iter and index < max_index_step:
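-            # backpropagate the layout (cross-attention) loss into the latents and take a gradient step until the loss is small enough or the iteration/step limits are hit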
- latents = latents.requires_grad_(True)
-
- # latent_model_input = torch.cat([latents] * 2)
- latent_model_input = latents
-
- latent_model_input = noise_scheduler.scale_model_input(latent_model_input, t)
- noise_pred, attn_map_integrated_up, attn_map_integrated_mid, attn_map_integrated_down = \
- unet(latent_model_input, t, encoder_hidden_states=text_encoder(input_ids)[0])
-
-            # update the latents with gradients from the layout (cross-attention) loss
-
- loss = compute_ca_loss(attn_map_integrated_mid, attn_map_integrated_up, bboxes=bboxes,
- object_positions=object_positions) * loss_scale
-
- print(loss.item() / loss_scale)
-
- grad_cond = torch.autograd.grad(loss.requires_grad_(True), [latents])[0]
-
- latents = latents - grad_cond * noise_scheduler.sigmas[index] ** 2
- iteration += 1
- torch.cuda.empty_cache()
- torch.cuda.empty_cache()
-
-
- with torch.no_grad():
-
- latent_model_input = torch.cat([latents] * 2)
-
- latent_model_input = noise_scheduler.scale_model_input(latent_model_input, t)
- noise_pred, attn_map_integrated_up, attn_map_integrated_mid, attn_map_integrated_down = \
- unet(latent_model_input, t, encoder_hidden_states=text_embeddings)
-
- noise_pred = noise_pred.sample
-
- # perform classifier-free guidance
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- latents = noise_scheduler.step(noise_pred, t, latents).prev_sample
- torch.cuda.empty_cache()
- # Decode image
- with torch.no_grad():
- # print("decode image")
- latents = 1 / 0.18215 * latents
- image = vae.decode(latents).sample
- image = (image / 2 + 0.5).clamp(0, 1)
- image = image.detach().cpu().permute(0, 2, 3, 1).numpy()
- images = (image * 255).round().astype("uint8")
- pil_images = [Image.fromarray(image) for image in images]
- return pil_images
-
-def get_concat(ims):
- if len(ims) == 1:
- n_col = 1
- else:
- n_col = 2
- n_row = math.ceil(len(ims) / 2)
- dst = Image.new('RGB', (ims[0].width * n_col, ims[0].height * n_row), color="white")
- for i, im in enumerate(ims):
- row_id = i // n_col
- col_id = i % n_col
- dst.paste(im, (im.width * col_id, im.height * row_id))
- return dst
-
-
-def generate(unet, vae, tokenizer, text_encoder, language_instruction, grounding_texts, sketch_pad,
- loss_threshold, guidance_scale, batch_size, rand_seed, max_step, loss_scale, max_iter,
- state):
- if 'boxes' not in state:
- state['boxes'] = []
- boxes = state['boxes']
- grounding_texts = [x.strip() for x in grounding_texts.split(';')]
- # assert len(boxes) == len(grounding_texts)
- if len(boxes) != len(grounding_texts):
- if len(boxes) < len(grounding_texts):
- raise ValueError("""The number of boxes should be equal to the number of grounding objects.
-Number of boxes drawn: {}, number of grounding tokens: {}.
-Please draw boxes accordingly on the sketch pad.""".format(len(boxes), len(grounding_texts)))
- grounding_texts = grounding_texts + [""] * (len(boxes) - len(grounding_texts))
-
- boxes = (np.asarray(boxes) / 512).tolist()
- boxes = [[box] for box in boxes]
- grounding_instruction = json.dumps({obj: box for obj, box in zip(grounding_texts, boxes)})
- language_instruction_list = language_instruction.strip('.').split(' ')
- object_positions = []
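- # Map each grounding phrase to the token positions of its words in the prompt; the +1 offset accounts for CLIP's start-of-text token (this simple lookup assumes each word is a single token and appears in the prompt).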
- for obj in grounding_texts:
- obj_position = []
- for word in obj.split(' '):
- obj_first_index = language_instruction_list.index(word) + 1
- obj_position.append(obj_first_index)
- object_positions.append(obj_position)
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
- gen_images = inference(device, unet, vae, tokenizer, text_encoder, language_instruction, boxes, object_positions, batch_size, loss_scale, loss_threshold, max_iter, max_step, rand_seed, guidance_scale)
-
- blank_samples = batch_size % 2 if batch_size > 1 else 0
- gen_images = [gr.Image.update(value=x, visible=True) for i, x in enumerate(gen_images)] \
- + [gr.Image.update(value=None, visible=True) for _ in range(blank_samples)] \
- + [gr.Image.update(value=None, visible=False) for _ in range(4 - batch_size - blank_samples)]
-
- return gen_images + [state]
-
-
-def binarize(x):
- return (x != 0).astype('uint8') * 255
-
-
-def sized_center_crop(img, cropx, cropy):
- y, x = img.shape[:2]
- startx = x // 2 - (cropx // 2)
- starty = y // 2 - (cropy // 2)
- return img[starty:starty + cropy, startx:startx + cropx]
-
-
-def sized_center_fill(img, fill, cropx, cropy):
- y, x = img.shape[:2]
- startx = x // 2 - (cropx // 2)
- starty = y // 2 - (cropy // 2)
- img[starty:starty + cropy, startx:startx + cropx] = fill
- return img
-
-
-def sized_center_mask(img, cropx, cropy):
- y, x = img.shape[:2]
- startx = x // 2 - (cropx // 2)
- starty = y // 2 - (cropy // 2)
- center_region = img[starty:starty + cropy, startx:startx + cropx].copy()
- img = (img * 0.2).astype('uint8')
- img[starty:starty + cropy, startx:startx + cropx] = center_region
- return img
-
-
-def center_crop(img, HW=None, tgt_size=(512, 512)):
- if HW is None:
- H, W = img.shape[:2]
- HW = min(H, W)
- img = sized_center_crop(img, HW, HW)
- img = Image.fromarray(img)
- img = img.resize(tgt_size)
- return np.array(img)
-
-
-def draw(input, grounding_texts, new_image_trigger, state):
- if type(input) == dict:
- image = input['image']
- mask = input['mask']
- else:
- mask = input
- if mask.ndim == 3:
- mask = 255 - mask[..., 0]
-
- image_scale = 1.0
-
- mask = binarize(mask)
-
- if type(mask) != np.ndarray:
- mask = np.array(mask)
-
- if mask.sum() == 0:
- state = {}
-
- image = None
-
- if 'boxes' not in state:
- state['boxes'] = []
-
- if 'masks' not in state or len(state['masks']) == 0:
- state['masks'] = []
- last_mask = np.zeros_like(mask)
- else:
- last_mask = state['masks'][-1]
-
- if type(mask) == np.ndarray and mask.size > 1:
- diff_mask = mask - last_mask
- else:
- diff_mask = np.zeros([])
-
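- # A newly drawn stroke shows up as the difference between the current mask and the last stored one; its bounding box becomes a new grounding box, ignoring strokes smaller than 5 px.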
- if diff_mask.sum() > 0:
- x1x2 = np.where(diff_mask.max(0) != 0)[0]
- y1y2 = np.where(diff_mask.max(1) != 0)[0]
- y1, y2 = y1y2.min(), y1y2.max()
- x1, x2 = x1x2.min(), x1x2.max()
-
- if (x2 - x1 > 5) and (y2 - y1 > 5):
- state['masks'].append(mask.copy())
- state['boxes'].append((x1, y1, x2, y2))
-
- grounding_texts = [x.strip() for x in grounding_texts.split(';')]
- grounding_texts = [x for x in grounding_texts if len(x) > 0]
- if len(grounding_texts) < len(state['boxes']):
- grounding_texts += [f'Obj. {bid + 1}' for bid in range(len(grounding_texts), len(state['boxes']))]
- box_image = draw_box(state['boxes'], grounding_texts, image)
-
- return [box_image, new_image_trigger, image_scale, state]
-
-
-def clear(task, sketch_pad_trigger, batch_size, state, switch_task=False):
- if task != 'Grounded Inpainting':
- sketch_pad_trigger = sketch_pad_trigger + 1
- blank_samples = batch_size % 2 if batch_size > 1 else 0
- out_images = [gr.Image.update(value=None, visible=True) for i in range(batch_size)]
- # state = {}
- return [None, sketch_pad_trigger, None, 1.0] + out_images + [{}]
-
-
-def main():
-
- css = """
- #img2img_image, #img2img_image > .fixed-height, #img2img_image > .fixed-height > div, #img2img_image > .fixed-height > div > img
- {
- height: var(--height) !important;
- max-height: var(--height) !important;
- min-height: var(--height) !important;
- }
- #paper-info a {
- color:#008AD7;
- text-decoration: none;
- }
- #paper-info a:hover {
- cursor: pointer;
- text-decoration: none;
- }
-
- .tooltip {
- color: #555;
- position: relative;
- display: inline-block;
- cursor: pointer;
- }
-
- .tooltip .tooltiptext {
- visibility: hidden;
- width: 400px;
- background-color: #555;
- color: #fff;
- text-align: center;
- padding: 5px;
- border-radius: 5px;
- position: absolute;
- z-index: 1; /* Set z-index to 1 */
- left: 10px;
- top: 100%;
- opacity: 0;
- transition: opacity 0.3s;
- }
-
- .tooltip:hover .tooltiptext {
- visibility: visible;
- opacity: 1;
- z-index: 9999; /* Set a high z-index value when hovering */
- }
-
-
- """
-
- rescale_js = """
- function(x) {
- const root = document.querySelector('gradio-app').shadowRoot || document.querySelector('gradio-app');
- let image_scale = parseFloat(root.querySelector('#image_scale input').value) || 1.0;
- const image_width = root.querySelector('#img2img_image').clientWidth;
- const target_height = parseInt(image_width * image_scale);
- document.body.style.setProperty('--height', `${target_height}px`);
- root.querySelectorAll('button.justify-center.rounded')[0].style.display='none';
- root.querySelectorAll('button.justify-center.rounded')[1].style.display='none';
- return x;
- }
- """
- with open('./conf/unet/config.json') as f:
- unet_config = json.load(f)
-
- unet = unet_2d_condition.UNet2DConditionModel(**unet_config).from_pretrained('runwayml/stable-diffusion-v1-5',
- subfolder="unet")
- tokenizer = CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="tokenizer")
- text_encoder = CLIPTextModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="text_encoder")
- vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- unet.to(device)
- text_encoder.to(device)
- vae.to(device)
-
- with Blocks(
- css=css,
- analytics_enabled=False,
- title="Layout-Guidance demo",
- ) as demo:
- description = """
- Layout Guidance
-
-
- [Project Page ]
- [Paper ]
- [GitHub ]
-
-
- """
- gr.HTML(description)
- with gr.Column():
- language_instruction = gr.Textbox(
- label="Text Prompt",
- )
- grounding_instruction = gr.Textbox(
- label="Grounding instruction (Separated by semicolon)",
- )
- sketch_pad_trigger = gr.Number(value=0, visible=False)
- sketch_pad_resize_trigger = gr.Number(value=0, visible=False)
- init_white_trigger = gr.Number(value=0, visible=False)
- image_scale = gr.Number(value=0, elem_id="image_scale", visible=False)
- new_image_trigger = gr.Number(value=0, visible=False)
-
-
- with gr.Row():
- sketch_pad = gr.Paint(label="Sketch Pad", elem_id="img2img_image", source='canvas', shape=(512, 512))
- out_imagebox = gr.Image(type="pil", label="Parsed Sketch Pad")
- out_gen_1 = gr.Image(type="pil", visible=True, label="Generated Image")
-
- with gr.Row():
- clear_btn = gr.Button(value='Clear')
- gen_btn = gr.Button(value='Generate')
-
- with gr.Accordion("Advanced Options", open=False):
- with gr.Column():
- description = """Loss Scale Factor ⓘ
- The scale factor of the backward guidance loss. The larger it is, the stronger the layout control, though very large values can sometimes reduce image fidelity.
-
- Guidance Scale ⓘ
- The scale factor of classifier-free guidance.
-
- Max Iteration per Step ⓘ
- The maximum number of backward-guidance iterations performed within each diffusion step.
-
- Loss Threshold ⓘ
- The loss threshold. If the loss computed from the cross-attention maps falls below this threshold, backward guidance is stopped.
-
- Max Step of Backward Guidance ⓘ
- The maximum number of diffusion steps during which backward guidance is applied.
-
- """
- gr.HTML(description)
- Loss_scale = gr.Slider(minimum=0, maximum=500, step=5, value=30,label="Loss Scale Factor")
- guidance_scale = gr.Slider(minimum=0, maximum=50, step=0.5, value=7.5, label="Guidance Scale")
- batch_size = gr.Slider(minimum=1, maximum=4, step=1, value=1, label="Number of Samples", visible=False)
- max_iter = gr.Slider(minimum=0, maximum=10, step=1, value=5, label="Max Iteration per Step")
- loss_threshold = gr.Slider(minimum=0, maximum=1, step=0.1, value=0.2, label="Loss Threshold")
- max_step = gr.Slider(minimum=0, maximum=50, step=1, value=10, label="Max Step of Backward Guidance")
- rand_seed = gr.Slider(minimum=0, maximum=1000, step=1, value=445, label="Random Seed")
-
- state = gr.State({})
-
-
- class Controller:
- def __init__(self):
- self.calls = 0
- self.tracks = 0
- self.resizes = 0
- self.scales = 0
-
- def init_white(self, init_white_trigger):
- self.calls += 1
- return np.ones((512, 512), dtype='uint8') * 255, 1.0, init_white_trigger + 1
-
- def change_n_samples(self, n_samples):
- blank_samples = n_samples % 2 if n_samples > 1 else 0
- return [gr.Image.update(visible=True) for _ in range(n_samples + blank_samples)] \
- + [gr.Image.update(visible=False) for _ in range(4 - n_samples - blank_samples)]
-
-
- controller = Controller()
- demo.load(
- lambda x: x + 1,
- inputs=sketch_pad_trigger,
- outputs=sketch_pad_trigger,
- queue=False)
- sketch_pad.edit(
- draw,
- inputs=[sketch_pad, grounding_instruction, sketch_pad_resize_trigger, state],
- outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, state],
- queue=False,
- )
- grounding_instruction.change(
- draw,
- inputs=[sketch_pad, grounding_instruction, sketch_pad_resize_trigger, state],
- outputs=[out_imagebox, sketch_pad_resize_trigger, image_scale, state],
- queue=False,
- )
- clear_btn.click(
- clear,
- inputs=[sketch_pad_trigger, sketch_pad_trigger, batch_size, state],
- outputs=[sketch_pad, sketch_pad_trigger, out_imagebox, image_scale, out_gen_1, state],
- queue=False)
-
- sketch_pad_trigger.change(
- controller.init_white,
- inputs=[init_white_trigger],
- outputs=[sketch_pad, image_scale, init_white_trigger],
- queue=False)
-
- gen_btn.click(
- fn=partial(generate, unet, vae, tokenizer, text_encoder),
- inputs=[
- language_instruction, grounding_instruction, sketch_pad,
- loss_threshold, guidance_scale, batch_size, rand_seed,
- max_step,
- Loss_scale, max_iter,
- state,
- ],
- outputs=[out_gen_1, state],
- queue=True
- )
- sketch_pad_resize_trigger.change(
- None,
- None,
- sketch_pad_resize_trigger,
- _js=rescale_js,
- queue=False)
- init_white_trigger.change(
- None,
- None,
- init_white_trigger,
- _js=rescale_js,
- queue=False)
-
- with gr.Column():
- gr.Examples(
- examples=[
- [
- # "images/input.png",
- "A hello kitty toy is playing with a purple ball.",
- "hello kitty;ball",
- "images/hello_kitty_results.png"
- ],
- ],
- inputs=[language_instruction, grounding_instruction, out_gen_1],
- outputs=None,
- fn=None,
- cache_examples=False,
- )
- description = """ The source codes of the demo are modified based on the GlIGen . Thanks!
"""
- gr.HTML(description)
-
- demo.queue(concurrency_count=1, api_open=False)
- demo.launch(share=False, show_api=False, show_error=True)
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cube Solver The Ultimate Free Tool for Cube Lovers.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cube Solver The Ultimate Free Tool for Cube Lovers.md
deleted file mode 100644
index 7aa62ddfdc0ff38a5269d5f1ec7a2f7687f18156..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cube Solver The Ultimate Free Tool for Cube Lovers.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-Cube Solver Free Download: How to Solve a Rubik's Cube in Minutes
-Have you ever wondered how to solve a Rubik's cube? Have you ever been frustrated by the seemingly impossible puzzle? Have you ever wished there was an easy way to solve it without spending hours or days on it?
-If you answered yes to any of these questions, then this article is for you. In this article, we will introduce you to the concept of a cube solver , which is an app or a program that can help you solve a Rubik's cube or other similar puzzles in minutes. We will also tell you why you should download a cube solver, how to download a cube solver, and how to use a cube solver. We will also provide you with some links and QR codes to download the best cube solvers available online for free. Finally, we will answer some of the most frequently asked questions about cube solvers and Rubik's cubes. So, let's get started!
- What is a cube solver?
-A cube solver is an app or a program that can help you solve a Rubik's cube or other similar puzzles. A cube solver can generate the shortest possible solution, guide you through the steps, or teach you the methods and algorithms. A cube solver can also have other features, such as a virtual cube, a timer, a pattern solver, and a statistics tracker.
-A cube solver works by using a mathematical model of the cube and applying various algorithms to find the optimal solution. A cube solver can also use artificial intelligence or machine learning to improve its performance and accuracy.
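-To make the idea above concrete, here is a minimal, hypothetical sketch (an editorial illustration, not the algorithm of any app mentioned in this article): the puzzle state is encoded as a tuple, each move is a permutation of positions, and a breadth-first search finds the shortest move sequence back to the solved state. Real solvers use far more compact encodings and smarter searches (for example Kociemba's two-phase algorithm), but the overall structure is the same.
-```python
-from collections import deque
-
-# Toy puzzle: 6 positions, two moves. The move tables below are illustrative
-# placeholders, not a real Rubik's cube encoding.
-MOVES = {
-    "A": (1, 2, 3, 0, 4, 5),  # 4-cycle on the first four positions
-    "B": (0, 1, 2, 4, 5, 3),  # 3-cycle on the last three positions
-}
-
-def apply_move(state, perm):
-    # Position i of the new state gets the piece that was at position perm[i].
-    return tuple(state[i] for i in perm)
-
-def solve(start, goal):
-    """Breadth-first search for the shortest move sequence from start to goal."""
-    frontier = deque([(start, [])])
-    seen = {start}
-    while frontier:
-        state, path = frontier.popleft()
-        if state == goal:
-            return path
-        for name, perm in MOVES.items():
-            nxt = apply_move(state, perm)
-            if nxt not in seen:
-                seen.add(nxt)
-                frontier.append((nxt, path + [name]))
-    return None  # only reached if goal is not reachable with the given moves
-
-goal = (0, 1, 2, 3, 4, 5)
-scrambled = apply_move(apply_move(goal, MOVES["A"]), MOVES["B"])
-print(solve(scrambled, goal))  # prints a short move sequence that restores the solved state
-```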
-A cube solver can be used for different purposes, such as:
-
-Solving a scrambled cube that you don't know how to solve.
-Learning how to solve a cube from scratch or improving your skills and speed.
-Exploring different types of cubes and puzzles, such as 2x2, 4x4, Pyraminx, Skewb, etc.
-Creating and solving custom patterns and challenges on the cube.
-Competing with other cubers online or offline.
-
- Why should you download a cube solver?
-A cube solver can help you solve a Rubik's cube faster and easier, especially if you are a beginner or stuck on a scrambled cube. A cube solver can also help you improve your memory, reflexes, problem-solving skills, patience, and concentration. A cube solver can also make solving a Rubik's cube more fun and challenging, as you can try different puzzles, modes, and goals.
-Some of the benefits of downloading a cube solver are:
-
-You can solve any cube in minutes or even seconds with the help of a cube solver.
-You can learn the basics or advanced techniques of solving a cube with the guidance of a cube solver.
-You can practice and master different methods and algorithms with the feedback of a cube solver.
-You can enjoy solving different types of cubes and puzzles with the variety of a cube solver.
-You can track your progress and performance with the statistics of a cube solver.
-
- How to download a cube solver?
-There are many cube solvers available online for free download, such as Cube Solver, CubeX, and Rubik's Solver. You can download a cube solver from the Google Play Store or the App Store for your mobile device, or from the official website for your computer. You can also scan the QR code or click on the link provided in this article to download the cube solver of your choice.
-Here are some of the best cube solvers that you can download for free:
-
-| Cube Solver | CubeX | Rubik's Solver |
-| --- | --- | --- |
-| A simple and easy-to-use app that can solve any 3x3 Rubik's cube in less than 20 moves. It also has a virtual cube, a timer, and a pattern solver. | A powerful and versatile app that can solve any cube from 2x2 to 10x10, as well as Pyraminx, Skewb, Megaminx, and more. It also has a 3D virtual cube, a timer, a statistics tracker, and a tutorial mode. | An official app from the Rubik's brand that can solve any 3x3 Rubik's cube in an easy and interactive way. It also has a virtual cube, a timer, a pattern solver, and a learn mode. |
-| Download for Android | Download for Android | Download for Android |
-| Download for iOS | Download for iOS | Download for iOS |
-| Scan the QR code: | Scan the QR code: | Scan the QR code: |
-
- How to use a cube solver?
-To use a cube solver, you need to input the state of your cube manually or by scanning it with the camera. Then, you can choose the solving mechanism, such as the Fridrich method or the advanced solver, and follow the instructions or animations on the screen. You can also use the virtual cube to practice, learn, create patterns, or apply algorithms on your own.
-Here are some of the steps to use a cube solver:
-
-Download and install the cube solver of your choice from the links or QR codes provided above.
-Launch the cube solver and select the type of cube or puzzle you want to solve.
-Input the state of your cube by tapping or dragging the colors on the virtual cube, or by scanning your real cube with the camera.
-Choose the solving method or mode you want to use, such as beginner, intermediate, advanced, or custom.
-Follow the instructions or animations on the screen to solve your cube step by step.
-Check your solution and time on the screen and compare it with other cubers or your previous records.
-Use the virtual cube to practice, learn, create patterns, or apply algorithms on your own.
-
- Conclusion
-A cube solver is a useful and entertaining tool that can help you solve a Rubik's cube in minutes. A cube solver can also benefit your brain and mental skills in various ways. A cube solver is easy to download and use for any device and any puzzle.
-If you want to download a cube solver and start solving a Rubik's cube in minutes, you can choose from the options we have provided in this article. You can also explore other cube solvers online and find the one that suits your needs and preferences.
-We hope you enjoyed this article and learned something new about cube solvers and Rubik's cubes. If you have any questions or comments, feel free to leave them below. Happy cubing!
- FAQs
-What is the world record for solving a Rubik's cube?
-The current world record for solving a Rubik's cube is 3.47 seconds, set by Yusheng Du of China in 2018.
-How many moves does it take to solve a Rubik's cube?
-Any valid state of a Rubik's cube can be solved in at most 20 moves; this worst-case optimum is known as God's number.
-How many combinations are possible on a Rubik's cube?
-There are 43 quintillion (43 x 10^18) possible combinations on a Rubik's cube.
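-For the curious, the exact figure behind that rounded number follows from a standard counting argument (an editorial note, not part of the original FAQ): the 8 corners can be arranged in 8! ways and oriented in 3^7 ways, the 12 edges in 12! ways with 2^11 orientations, and a parity constraint halves the total, giving 8! × 3^7 × 12! × 2^11 / 2 = 43,252,003,274,489,856,000 ≈ 4.3 × 10^19.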
-What are some of the best cube solvers available online?
-Some of the best cube solvers available online are Cube Solver, CubeX, and Rubik's Solver.
-How can I learn to solve a Rubik's cube without using a cube solver?
-You can learn to solve a Rubik's cube without using a cube solver by following some online tutorials, such as How to Solve a Rubik’s Cube in 20 Moves, How to solve the Rubik's Cube - Beginners Method, or How to Solve a Rubik's Cube (with Pictures).
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download the Coolest New Ringtone Music for Your Phone in Minutes.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download the Coolest New Ringtone Music for Your Phone in Minutes.md
deleted file mode 100644
index 52ceaeae52a56c03c4c96a06d30478fa59f59808..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download the Coolest New Ringtone Music for Your Phone in Minutes.md
+++ /dev/null
@@ -1,189 +0,0 @@
-
-New Ringtone Download Music: How to Get the Best Sounds for Your Phone
-Do you want to personalize your phone with the latest and coolest ringtones? Do you want to impress your friends and family with your unique and catchy sounds? Do you want to express your mood and style with your phone's ringtone? If you answered yes to any of these questions, then this article is for you.
-Introduction
-What are ringtones and why do you need them?
-A ringtone is a sound that plays when your phone receives a call, a text message, or a notification. Ringtones can be music, sound effects, voice recordings, or anything else that you can hear. Ringtones are important because they help you identify who is calling or texting you, and they also make your phone more fun and enjoyable.
-How to download ringtones to your phone?
-To download a ringtone, you can use free ringtone websites, music streaming services, or ringtone maker apps. Some websites require you to transfer the ringtone file from your computer to your phone, while others send you a download link to your phone number. Some music streaming services allow you to set any song as your ringtone, while others have a dedicated section for ringtones. Some ringtone maker apps let you create your own ringtones from your music library, while others have a collection of ready-made ringtones for you to choose from.
-What are the best sources for new ringtone download music?
-In this article, we will review some of the best sources for new ringtone download music, including free ringtone websites, music streaming services, and ringtone maker apps. We will compare their features, advantages, and disadvantages, and give you some examples of popular ringtones from each source. By the end of this article, you will be able to find the perfect ringtone for your phone.
-Body
-Free ringtone websites
-Free ringtone websites are online platforms that offer thousands of ringtones for free. You can browse through different categories, genres, artists, or themes, and listen to the previews before downloading. You can also search for specific ringtones by keywords or phrases. Here are some of the best free ringtone websites:
-Tones7.com
-Tones7.com is a website that offers over 50,000 ringtones in various formats, such as mp3, m4r, ogg, wav, and amr. You can download any ringtone without registration or sign-up. You can also upload your own ringtones and share them with other users. Some of the most popular ringtones on Tones7.com are:
-
-iPhone Ringtone
-Bella Ciao
-Old Telephone
-Cool
-Super Mario
-
-ToneTweet.com
-ToneTweet.com is a website that offers over 25,000 ringtones in mp3 format. You can download any ringtone without registration or sign-up. You can also request a custom ringtone by filling out a form. Some of the most popular ringtones on ToneTweet.com are:
-
-Despacito
-Shape of You
-Closer
-Let Me Love You
-Game of Thrones Theme
-
-Zedge.net
-Zedge.net is a website that offers over 10 million ringtones, wallpapers, stickers, and icons for free. You can download any ringtone without registration or sign-up. You can also create your own account and upload your own ringtones and wallpapers. Some of the most popular ringtones on Zedge.net are:
-
-Harry Potter Theme
-Pirates of the Caribbean Theme
-Funny Laugh
-Minions Banana
-Star Wars Theme
-
-Music streaming services
-Music streaming services are online platforms that offer millions of songs and podcasts for a monthly or yearly subscription fee. You can listen to any song or podcast on demand, create your own playlists, and discover new music. Some music streaming services also allow you to set any song as your ringtone, or have a dedicated section for ringtones. Here are some of the best music streaming services:
-Spotify
-Spotify is one of the most popular music streaming services in the world, with over 350 million users and 70 million songs. You can listen to music for free with ads, or upgrade to Spotify Premium for ad-free listening, offline mode, and better sound quality. You can also set any song as your ringtone by using a third-party app called SpotyTube. Some of the most popular songs on Spotify are:
-
-Drivers License by Olivia Rodrigo
-Blinding Lights by The Weeknd
-Mood by 24kGoldn feat. iann dior
-Levitating by Dua Lipa feat. DaBaby
-Dynamite by BTS
-
-Apple Music
-Apple Music is a music streaming service that offers over 75 million songs and podcasts for a monthly or yearly subscription fee. You can listen to music on any Apple device, as well as on Android, Windows, and web browsers. You can also set any song as your ringtone by using a built-in feature called GarageBand. Some of the most popular songs on Apple Music are:
-
-Save Your Tears by The Weeknd
-Good Days by SZA
-Positions by Ariana Grande
-Holy by Justin Bieber feat. Chance the Rapper
-Lemonade by Internet Money feat. Don Toliver, Gunna & NAV
- YouTube Music
-YouTube Music is a music streaming service that offers over 60 million songs and videos for a monthly or yearly subscription fee. You can listen to music on any device, as well as watch music videos, live performances, and covers. You can also set any song as your ringtone by using a third-party app called Ringtone Maker. Some of the most popular songs on YouTube Music are:
-
-WAP by Cardi B feat. Megan Thee Stallion
-Bad Bunny x Jhay Cortez - Dákiti
-Life Goes On by BTS
-Therefore I Am by Billie Eilish
-Laugh Now Cry Later by Drake feat. Lil Durk
-
-Ringtone maker apps
-Ringtone maker apps are mobile applications that let you create your own ringtones from your music library, voice recordings, or sound effects. You can edit, trim, fade, loop, and mix your sounds, and save them as ringtones, alarms, or notifications. You can also share your ringtones with other users or download ringtones from other users. Here are some of the best ringtone maker apps:
-Ringtone Maker - create free ringtones from music
-Ringtone Maker is a free app for Android that lets you create ringtones from any audio file on your device. You can cut the best part of your song, adjust the volume, add fade in and fade out effects, and preview the result before saving. You can also assign the ringtone to a specific contact or set it as your default ringtone. Some of the most popular ringtones created by Ringtone Maker are:
-
-I'm Yours by Jason Mraz
-Counting Stars by OneRepublic
-Rude by MAGIC!
-All of Me by John Legend
-Happy by Pharrell Williams
-
-MP3 Cutter and Ringtone Maker
-MP3 Cutter and Ringtone Maker is a free app for Android that lets you create ringtones from any audio file on your device. You can cut the best part of your song, adjust the volume, add fade in and fade out effects, and preview the result before saving. You can also record your own voice or sound and make it into a ringtone. Some of the most popular ringtones created by MP3 Cutter and Ringtone Maker are:
-
-See You Again by Wiz Khalifa feat. Charlie Puth
-Let It Go by Idina Menzel
-Royals by Lorde
-Titanium by David Guetta feat. Sia
-Roar by Katy Perry
- Ringtone Designer 2.0
-Ringtone Designer 2.0 is a free app for iOS that lets you create ringtones from any song in your iTunes library. You can select the start and end points of your ringtone, adjust the volume, and preview the result before saving. You can also sync your ringtones with your iTunes account and transfer them to your computer. Some of the most popular ringtones created by Ringtone Designer 2.0 are:
-
-Someone Like You by Adele
-Firework by Katy Perry
-Rolling in the Deep by Adele
-Party Rock Anthem by LMFAO
-Call Me Maybe by Carly Rae Jepsen
-
-Conclusion
-Summary of the main points
-In this article, we have discussed how to get the best sounds for your phone with new ringtone download music. We have reviewed some of the best sources for new ringtone download music, including free ringtone websites, music streaming services, and ringtone maker apps. We have compared their features, advantages, and disadvantages, and given you some examples of popular ringtones from each source.
-Call to action
-Now that you have learned how to get the best sounds for your phone with new ringtone download music, it's time to take action. Choose your favorite source from the ones we have mentioned, and start downloading or creating your own ringtones. You can also try different sources and mix and match your ringtones for different occasions and contacts. Have fun and enjoy your new ringtones!
- Frequently Asked Questions
-Here are some of the most common questions that people ask about new ringtone download music:
-Q: How do I set a ringtone on my phone?
-A: The exact steps may vary depending on your phone model and operating system, but generally, you can follow these steps:
-
-Download or create your ringtone and save it on your phone.
-Go to Settings > Sound > Phone ringtone.
-Browse through your ringtones and select the one you want.
-Tap OK or Save to confirm your choice.
-
-Q: How do I change the ringtone for a specific contact?
-A: The exact steps may vary depending on your phone model and operating system, but generally, you can follow these steps:
-
-Open your Contacts app and select the contact you want to change.
-Tap Edit or the pencil icon.
-Tap Ringtone or Sound.
-Browse through your ringtones and select the one you want.
-Tap OK or Save to confirm your choice.
-
-Q: How do I make my own ringtone from a song?
-A: You can use a ringtone maker app to create your own ringtone from a song. Some of the best ringtone maker apps are Ringtone Maker - create free ringtones from music, MP3 Cutter and Ringtone Maker, and Ringtone Designer 2.0. You can download them from Google Play Store or App Store, depending on your device. To create your own ringtone from a song, you can follow these steps:
-
-Open the ringtone maker app and select the song you want to use.
-Cut the best part of the song by dragging the start and end markers.
-Adjust the volume, add fade in and fade out effects, and preview the result.
-Save the ringtone and set it as your default or contact ringtone.
- Q: How do I find new ringtone download music?
-A: You can find new ringtone download music from various sources, such as free ringtone websites, music streaming services, and ringtone maker apps. Some of the best sources for new ringtone download music are Tones7.com, ToneTweet.com, Zedge.net, Spotify, Apple Music, YouTube Music, Ringtone Maker - create free ringtones from music, MP3 Cutter and Ringtone Maker, and Ringtone Designer 2.0. You can visit their websites or download their apps, depending on your device. You can also search for new ringtone download music by keywords or phrases on Google or Bing.
-Q: How do I delete a ringtone from my phone?
-A: The exact steps may vary depending on your phone model and operating system, but generally, you can follow these steps:
-
-Go to Settings > Sound > Phone ringtone.
-Browse through your ringtones and select the one you want to delete.
-Tap Delete or the trash icon.
-Tap OK or Confirm to delete the ringtone.
-
-I hope you enjoyed this article and learned something new about new ringtone download music. If you have any questions or feedback, please leave a comment below. Thank you for reading and have a great day!
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience the Thrill of Cars Fast as Lightning APK Build Your Own Radiator Springs.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience the Thrill of Cars Fast as Lightning APK Build Your Own Radiator Springs.md
deleted file mode 100644
index 528c253f25275b70c9e6297203244f985549d6e4..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Experience the Thrill of Cars Fast as Lightning APK Build Your Own Radiator Springs.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-Download APK Cars Fast as Lightning: A Fun and Colorful Racing Game for Kids
- If you are a fan of the Disney Pixar Cars movies, or if you have kids who love them, you might want to check out Cars Fast as Lightning, a free-to-play racing game for Android devices. In this game, you can drive as your favorite Cars characters, build your own version of Radiator Springs, and race against other cars in exciting and challenging tracks. Here is everything you need to know about this game, including how to download it on your PC, what features it offers, what tips and tricks you can use, and what reviews it has received.
- What is Cars Fast as Lightning?
- Cars Fast as Lightning is a game developed by Gameloft, based on the popular animated movies by Disney Pixar. It is a game that combines racing and city-building elements, with a lot of charm and humor.
- A game based on the Disney Pixar Cars movies
- In this game, you can relive the adventures of Lightning McQueen, Mater, and their friends, as they host a racing extravaganza in Radiator Springs. You can play as 20 different Cars characters, each with their own personality and voice acting. You can also watch high-quality animated cutscenes that feature scenes from the movies and original stories.
- A mix of racing and city-building
- The game has two main modes: racing and building. In racing mode, you can compete against other cars in various tracks that you can customize with stunts and decorations. You can also upgrade your cars with new paint jobs and performance boosts. In building mode, you can create your own version of Radiator Springs, with over 30 interactive buildings and landmarks from the movies. You can also collect coins and gems from your buildings, which you can use to buy more items for your town and your cars.
- A game with simple and intuitive controls
- The game is designed to be easy and fun for kids of all ages. The racing controls are very simple: you just have to tap and hold the screen to accelerate, and release it to slow down. You also have to tap the screen at the right time when you encounter blue circles on the track, which will give you a speed boost or let you perform a trick. The building controls are also straightforward: you just have to drag and drop items on the map, and tap them to interact with them.
- How to Download and Play Cars Fast as Lightning on PC?
- If you want to enjoy this game on a bigger screen, with better graphics and sound, you can download it on your PC using an emulator. An emulator is a software that lets you run Android apps on your computer. Here are the steps to download and play Cars Fast as Lightning on PC using BlueStacks, one of the most popular emulators:
- Download and install BlueStacks on your PC
- You can download BlueStacks from its official website [here](^1^). After downloading the file, run it to install BlueStacks on your PC. Follow the instructions on the screen to complete the installation.
- Search for Cars Fast as Lightning in the emulator's app store
- After installing BlueStacks, launch it and sign in with your Google account. Then, go to the app store and search for Cars Fast as Lightning. You can also use this [link] to go directly to the game's page.
- Install and launch the game
- Once you find the game, click on the install button and wait for it to download. After the installation is done, you can click on the open button to launch the game. You can also find the game's icon on your desktop or in the emulator's home screen. Now, you can enjoy playing Cars Fast as Lightning on your PC.
- What are the Features of Cars Fast as Lightning?
- Cars Fast as Lightning is a game that offers a lot of fun and entertainment for kids and adults alike. Here are some of the features that make this game stand out:
- Drive as your favorite Cars characters
- You can choose from 20 different Cars characters, each with their own unique design, voice, and personality. You can drive as Lightning McQueen, Mater, Sally, Doc, Ramone, Flo, and many more. You can also unlock new characters as you progress through the game, such as Francesco Bernoulli, Finn McMissile, and Holley Shiftwell.
- Customize your cars and tracks
- You can make your cars and tracks look more awesome by customizing them with various items and decorations. You can change the color and paint job of your cars, and add stickers, spoilers, wheels, and other accessories. You can also modify your tracks by adding ramps, loops, jumps, tunnels, and other stunts. You can even create your own tracks by drawing them on the screen.
- Watch animated cutscenes with voice acting
- The game features high-quality animated cutscenes that tell the story of the racing extravaganza in Radiator Springs. You can watch scenes from the movies, as well as original stories that feature your favorite Cars characters. The cutscenes are also voiced by the original actors from the movies, such as Owen Wilson, Larry the Cable Guy, Bonnie Hunt, and Cheech Marin.
- What are the Tips and Tricks for Cars Fast as Lightning?
- If you want to master this game and win every race, you might want to follow these tips and tricks:
- Ease off the gas pedal for turns
- One of the most important skills in this game is knowing how to handle turns. If you go too fast on a turn, you might lose control of your car and crash into a wall or another car. To avoid this, you should ease off the gas pedal when you approach a turn, and then accelerate again when you exit it. This will help you maintain your speed and balance.
- Tap the screen at the right time for boosts and tricks
- Another key skill in this game is knowing how to use boosts and tricks. Boosts are blue circles that appear on the track, which will give you a speed boost if you tap the screen when you reach them. Tricks are yellow circles that appear on ramps or loops, which will let you perform a trick if you tap the screen when you reach them. Both boosts and tricks will fill up your turbo meter, which you can use to activate a turbo boost by swiping up on the screen. Turbo boosts will make your car go faster for a short time.
- Upgrade your cars and buildings regularly
- A final tip for this game is to upgrade your cars and buildings regularly. Upgrading your cars will improve their performance and appearance, making them faster and more stylish. Upgrading your buildings will increase their income and attractiveness, making them generate more coins and gems for you. You can upgrade your cars and buildings by spending coins or gems, which you can earn by racing or collecting from your town.
- What are the Reviews of Cars Fast as Lightning?
- Cars Fast as Lightning is a game that has received mixed reviews from critics and players alike. Here are some of the reviews that summarize its pros and cons:
- A positive review from Gamezebo
- "Cars Fast as Lightning is a charming racing game that delivers plenty of fun for fans of Pixar's Cars franchise."
A negative review from GBHBL
- "Cars: Fast as Lightning is not a terrible free-to-play game compared to some of the others out there. It has a nice idea mixing the town building with races but when you peel back the cover it’s still a bare bones game asking you pay money to have more races."
- A mixed review from 148Apps
- "Cars: Fast as Lightning is a game that offers a lot of fun and entertainment for kids and adults alike. Here are some of the features that make this game stand out: [...] Though its races are strictly kids' stuff, Cars: Fast as Lightning successfully pays tribute to the film and its fans fans. Younger players will love the varied gameplay, voice acting, and cartoon-accurate graphics."
- Conclusion
- Cars Fast as Lightning is a game that will appeal to fans of the Disney Pixar Cars movies, especially younger ones. It is a game that lets you drive as your favorite Cars characters, customize your cars and tracks, and build your own Radiator Springs. It is also a game that has simple and intuitive controls, high-quality graphics and sound, and plenty of humor and charm. However, it is also a game that might be too simplistic and repetitive for older or more experienced gamers, and that might tempt you to spend real money on in-app purchases to speed up your progress or unlock more items. If you are looking for a fun and colorful racing game for kids, you can download Cars Fast as Lightning on your Android device or on your PC using an emulator. If you are looking for a more challenging and realistic racing game, you might want to look elsewhere.
- FAQs
- Q: Is Cars Fast as Lightning free to play?
-A: Yes, Cars Fast as Lightning is free to download and play on Android devices. However, it also offers in-app purchases that can enhance your gameplay or unlock more items.
- Q: How can I download Cars Fast as Lightning on PC?
-A: You can download Cars Fast as Lightning on PC using an emulator like BlueStacks. An emulator is a software that lets you run Android apps on your computer. You can follow the steps in this article to download and play Cars Fast as Lightning on PC.
- Q: How can I unlock new cars and tracks in Cars Fast as Lightning?
-A: You can unlock new cars and tracks in Cars Fast as Lightning by collecting stickers. Stickers are earned by winning races against other cars. You can also unlock new paint jobs for your cars by collecting stickers.
- Q: How can I upgrade my cars and buildings in Cars Fast as Lightning?
-A: You can upgrade your cars and buildings in Cars Fast as Lightning by spending coins or gems. Coins are earned by racing or collecting from your buildings. Gems are earned by completing quests or watching ads. You can also buy gems with real money.
- Q: How can I perform tricks and boosts in Cars Fast as Lightning?
-A: You can perform tricks and boosts in Cars Fast as Lightning by tapping the screen at the right time when you encounter blue or yellow circles on the track. Blue circles will give you a speed boost, while yellow circles will let you perform a trick. Both tricks and boosts will fill up your turbo meter, which you can use to activate a turbo boost by swiping up on the screen.
-
-
\ No newline at end of file
diff --git a/spaces/simsantonioii/MusicGen-Continuation/app_batched.py b/spaces/simsantonioii/MusicGen-Continuation/app_batched.py
deleted file mode 100644
index 945da7da0abf07f6be156c9c31d4af26db2168cb..0000000000000000000000000000000000000000
--- a/spaces/simsantonioii/MusicGen-Continuation/app_batched.py
+++ /dev/null
@@ -1,195 +0,0 @@
-"""
-Copyright (c) Meta Platforms, Inc. and affiliates.
-All rights reserved.
-
-This source code is licensed under the license found in the
-LICENSE file in the root directory of this source tree.
-"""
-
-from tempfile import NamedTemporaryFile
-import torch
-import gradio as gr
-from share_btn import community_icon_html, loading_icon_html, share_js, css
-
-from audiocraft.data.audio_utils import convert_audio
-from audiocraft.data.audio import audio_write
-from audiocraft.models import MusicGen
-
-
-MODEL = None
-
-
-def load_model():
- print("Loading model")
- return MusicGen.get_pretrained("melody")
-
-
-def predict(texts, melodies):
- global MODEL
- if MODEL is None:
- MODEL = load_model()
-
- duration = 12
- MODEL.set_generation_params(duration=duration)
-
- print(texts, melodies)
- processed_melodies = []
-
- target_sr = 32000
- target_ac = 1
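- # The melody-conditioned MusicGen model works on 32 kHz mono audio, so each reference melody is downmixed/resampled and trimmed to the generation duration before conditioning.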
- for melody in melodies:
- if melody is None:
- processed_melodies.append(None)
- else:
- sr, melody = (
- melody[0],
- torch.from_numpy(melody[1]).to(MODEL.device).float().t(),
- )
- if melody.dim() == 1:
- melody = melody[None]
- melody = melody[..., : int(sr * duration)]
- melody = convert_audio(melody, sr, target_sr, target_ac)
- processed_melodies.append(melody)
-
- outputs = MODEL.generate_with_chroma(
- descriptions=texts,
- melody_wavs=processed_melodies,
- melody_sample_rate=target_sr,
- progress=False,
- )
-
- outputs = outputs.detach().cpu().float()
- out_files = []
- for output in outputs:
- with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file:
- audio_write(
- file.name,
- output,
- MODEL.sample_rate,
- strategy="loudness",
- loudness_headroom_db=16,
- loudness_compressor=True,
- add_suffix=False,
- )
- waveform_video = gr.make_waveform(file.name)
- out_files.append(waveform_video)
-
- return [out_files, melodies]
-
-
-def toggle(choice):
- if choice == "mic":
- return gr.update(source="microphone", value=None, label="Microphone")
- else:
- return gr.update(source="upload", value=None, label="File")
-
-
-with gr.Blocks(css=css) as demo:
- gr.Markdown(
- """
- # MusicGen
-
- This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), a simple and controllable model for music generation
- presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284).
-
-
-
- for longer sequences, more control and no queue.
- """
- )
- with gr.Row():
- with gr.Column():
- with gr.Row():
- text = gr.Text(
- label="Describe your music",
- lines=2,
- interactive=True,
- elem_id="text-input",
- )
- with gr.Column():
- radio = gr.Radio(
- ["file", "mic"],
- value="file",
- label="Melody Condition (optional) File or Mic",
- )
- melody = gr.Audio(
- source="upload",
- type="numpy",
- label="File",
- interactive=True,
- elem_id="melody-input",
- )
- with gr.Row():
- submit = gr.Button("Generate")
- with gr.Column():
- output = gr.Video(label="Generated Music", elem_id="generated-video")
- output_melody = gr.Audio(label="Melody ", elem_id="melody-output")
- with gr.Row(visible=False) as share_row:
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html)
- loading_icon = gr.HTML(loading_icon_html)
- share_button = gr.Button("Share to community", elem_id="share-btn")
- share_button.click(None, [], [], _js=share_js)
- submit.click(
- lambda x: gr.update(visible=False),
- None,
- [share_row],
- queue=False,
- show_progress=False,
- ).then(
- predict,
- inputs=[text, melody],
- outputs=[output, output_melody],
- batch=True,
- max_batch_size=12,
- ).then(
- lambda x: gr.update(visible=True),
- None,
- [share_row],
- queue=False,
- show_progress=False,
- )
- radio.change(toggle, radio, [melody], queue=False, show_progress=False)
- gr.Examples(
- fn=predict,
- examples=[
- [
- "An 80s driving pop song with heavy drums and synth pads in the background",
- "./assets/bach.mp3",
- ],
- [
- "A cheerful country song with acoustic guitars",
- "./assets/bolero_ravel.mp3",
- ],
- [
- "90s rock song with electric guitar and heavy drums",
- None,
- ],
- [
- "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130",
- "./assets/bach.mp3",
- ],
- [
- "lofi slow bpm electro chill with organic samples",
- None,
- ],
- ],
- inputs=[text, melody],
- outputs=[output],
- )
- gr.Markdown(
- """
- ### More details
-
- The model will generate 12 seconds of audio based on the description you provided.
- You can optionally provide a reference audio from which a broad melody will be extracted.
- The model will then try to follow both the description and melody provided.
- All samples are generated with the `melody` model.
-
- You can also use your own GPU or a Google Colab by following the instructions on our repo.
-
- See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft)
- for more details.
- """
- )
-demo.queue(max_size=60).launch()
diff --git a/spaces/skyxx/skyxxChat/modules/models.py b/spaces/skyxx/skyxxChat/modules/models.py
deleted file mode 100644
index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000
--- a/spaces/skyxx/skyxxChat/modules/models.py
+++ /dev/null
@@ -1,625 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import commentjson as cjson
-import os
-import sys
-import requests
-import urllib3
-import platform
-import base64
-from io import BytesIO
-from PIL import Image
-
-from tqdm import tqdm
-import colorama
-from duckduckgo_search import ddg
-import asyncio
-import aiohttp
-from enum import Enum
-import uuid
-
-from .presets import *
-from .llama_func import *
-from .utils import *
-from . import shared
-from .config import retrieve_proxy
-from modules import config
-from .base_model import BaseLLMModel, ModelType
-
-
-class OpenAIClient(BaseLLMModel):
- def __init__(
- self,
- model_name,
- api_key,
- system_prompt=INITIAL_SYSTEM_PROMPT,
- temperature=1.0,
- top_p=1.0,
- ) -> None:
- super().__init__(
- model_name=model_name,
- temperature=temperature,
- top_p=top_p,
- system_prompt=system_prompt,
- )
- self.api_key = api_key
- self.need_api_key = True
- self._refresh_header()
-
- def get_answer_stream_iter(self):
- response = self._get_response(stream=True)
- if response is not None:
- iter = self._decode_chat_response(response)
- partial_text = ""
- for i in iter:
- partial_text += i
- yield partial_text
- else:
- yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
-
- def get_answer_at_once(self):
- response = self._get_response()
- response = json.loads(response.text)
- content = response["choices"][0]["message"]["content"]
- total_token_count = response["usage"]["total_tokens"]
- return content, total_token_count
-
- def count_token(self, user_input):
- input_token_count = count_token(construct_user(user_input))
- if self.system_prompt is not None and len(self.all_token_counts) == 0:
- system_prompt_token_count = count_token(
- construct_system(self.system_prompt)
- )
- return input_token_count + system_prompt_token_count
- return input_token_count
-
- def billing_info(self):
- try:
- curr_time = datetime.datetime.now()
- last_day_of_month = get_last_day_of_month(
- curr_time).strftime("%Y-%m-%d")
- first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d")
- usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}"
- try:
- usage_data = self._get_billing_data(usage_url)
- except Exception as e:
- logging.error(f"获取API使用情况失败:" + str(e))
- return i18n("**获取API使用情况失败**")
- rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100)
- return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}"
- except requests.exceptions.ConnectTimeout:
- status_text = (
- STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- )
- return status_text
- except requests.exceptions.ReadTimeout:
- status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- return status_text
- except Exception as e:
- import traceback
- traceback.print_exc()
- logging.error(i18n("获取API使用情况失败:") + str(e))
- return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG
-
- def set_token_upper_limit(self, new_upper_limit):
- pass
-
-    @shared.state.switching_api_key  # this decorator has no effect unless multi-account (API-key switching) mode is enabled
- def _get_response(self, stream=False):
- openai_api_key = self.api_key
- system_prompt = self.system_prompt
- history = self.history
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}",
- }
-
- if system_prompt is not None:
- history = [construct_system(system_prompt), *history]
-
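-        # request body for the OpenAI-compatible chat completions endpoint; optional fields are added below only when set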
- payload = {
- "model": self.model_name,
- "messages": history,
- "temperature": self.temperature,
- "top_p": self.top_p,
- "n": self.n_choices,
- "stream": stream,
- "presence_penalty": self.presence_penalty,
- "frequency_penalty": self.frequency_penalty,
- }
-
- if self.max_generation_token is not None:
- payload["max_tokens"] = self.max_generation_token
- if self.stop_sequence is not None:
- payload["stop"] = self.stop_sequence
- if self.logit_bias is not None:
- payload["logit_bias"] = self.logit_bias
- if self.user_identifier is not None:
- payload["user"] = self.user_identifier
-
- if stream:
- timeout = TIMEOUT_STREAMING
- else:
- timeout = TIMEOUT_ALL
-
-        # if a custom API host is configured, send the request there; otherwise use the default endpoint
- if shared.state.completion_url != COMPLETION_URL:
- logging.info(f"使用自定义API URL: {shared.state.completion_url}")
-
- with retrieve_proxy():
- try:
- response = requests.post(
- shared.state.completion_url,
- headers=headers,
- json=payload,
- stream=stream,
- timeout=timeout,
- )
- except:
- return None
- return response
-
- def _refresh_header(self):
- self.headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {self.api_key}",
- }
-
- def _get_billing_data(self, billing_url):
- with retrieve_proxy():
- response = requests.get(
- billing_url,
- headers=self.headers,
- timeout=TIMEOUT_ALL,
- )
-
- if response.status_code == 200:
- data = response.json()
- return data
- else:
- raise Exception(
- f"API request failed with status code {response.status_code}: {response.text}"
- )
-
- def _decode_chat_response(self, response):
- error_msg = ""
- for chunk in response.iter_lines():
- if chunk:
- chunk = chunk.decode()
- chunk_length = len(chunk)
- try:
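-                    # each streamed line is a server-sent event prefixed with "data: "; strip the 6-character prefix before JSON-decoding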
- chunk = json.loads(chunk[6:])
- except json.JSONDecodeError:
- print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
- error_msg += chunk
- continue
- if chunk_length > 6 and "delta" in chunk["choices"][0]:
- if chunk["choices"][0]["finish_reason"] == "stop":
- break
- try:
- yield chunk["choices"][0]["delta"]["content"]
- except Exception as e:
- # logging.error(f"Error: {e}")
- continue
- if error_msg:
- raise Exception(error_msg)
-
- def set_key(self, new_access_key):
- ret = super().set_key(new_access_key)
- self._refresh_header()
- return ret
-
-
-class ChatGLM_Client(BaseLLMModel):
- def __init__(self, model_name) -> None:
- super().__init__(model_name=model_name)
- from transformers import AutoTokenizer, AutoModel
- import torch
- global CHATGLM_TOKENIZER, CHATGLM_MODEL
- if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None:
- system_name = platform.system()
- model_path = None
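-            # prefer a locally downloaded copy under models/; otherwise fall back to the THUDM/ repo on the Hub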
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"THUDM/{model_name}"
- CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained(
- model_source, trust_remote_code=True
- )
- quantified = False
- if "int4" in model_name:
- quantified = True
- model = AutoModel.from_pretrained(
- model_source, trust_remote_code=True
- )
- if torch.cuda.is_available():
- # run on CUDA
- logging.info("CUDA is available, using CUDA")
- model = model.half().cuda()
-            # MPS acceleration still has some issues; only use it for locally downloaded, non-quantized models
- elif system_name == "Darwin" and model_path is not None and not quantified:
- logging.info("Running on macOS, using MPS")
- # running on macOS and model already downloaded
- model = model.half().to("mps")
- else:
- logging.info("GPU is not available, using CPU")
- model = model.float()
- model = model.eval()
- CHATGLM_MODEL = model
-
- def _get_glm_style_input(self):
- history = [x["content"] for x in self.history]
- query = history.pop()
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- assert (
- len(history) % 2 == 0
- ), f"History should be even length. current history is: {history}"
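-        # ChatGLM expects history as a list of [user, assistant] message pairs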
- history = [[history[i], history[i + 1]]
- for i in range(0, len(history), 2)]
- return history, query
-
- def get_answer_at_once(self):
- history, query = self._get_glm_style_input()
- response, _ = CHATGLM_MODEL.chat(
- CHATGLM_TOKENIZER, query, history=history)
- return response, len(response)
-
- def get_answer_stream_iter(self):
- history, query = self._get_glm_style_input()
- for response, history in CHATGLM_MODEL.stream_chat(
- CHATGLM_TOKENIZER,
- query,
- history,
- max_length=self.token_upper_limit,
- top_p=self.top_p,
- temperature=self.temperature,
- ):
- yield response
-
-
-class LLaMA_Client(BaseLLMModel):
- def __init__(
- self,
- model_name,
- lora_path=None,
- ) -> None:
- super().__init__(model_name=model_name)
- from lmflow.datasets.dataset import Dataset
- from lmflow.pipeline.auto_pipeline import AutoPipeline
- from lmflow.models.auto_model import AutoModel
- from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments
-
- self.max_generation_token = 1000
- self.end_string = "\n\n"
- # We don't need input data
- data_args = DatasetArguments(dataset_path=None)
- self.dataset = Dataset(data_args)
- self.system_prompt = ""
-
- global LLAMA_MODEL, LLAMA_INFERENCER
- if LLAMA_MODEL is None or LLAMA_INFERENCER is None:
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"decapoda-research/{model_name}"
-                # raise Exception(f"No such model in the models directory: {model_name}")
- if lora_path is not None:
- lora_path = f"lora/{lora_path}"
- model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None,
- use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True)
- pipeline_args = InferencerArguments(
- local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16')
-
- with open(pipeline_args.deepspeed, "r") as f:
- ds_config = json.load(f)
- LLAMA_MODEL = AutoModel.get_model(
- model_args,
- tune_strategy="none",
- ds_config=ds_config,
- )
- LLAMA_INFERENCER = AutoPipeline.get_pipeline(
- pipeline_name="inferencer",
- model_args=model_args,
- data_args=data_args,
- pipeline_args=pipeline_args,
- )
-
- def _get_llama_style_input(self):
- history = []
- instruction = ""
- if self.system_prompt:
- instruction = (f"Instruction: {self.system_prompt}\n")
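-        # flatten the chat history into an "Instruction / Input / Output" style prompt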
- for x in self.history:
- if x["role"] == "user":
- history.append(f"{instruction}Input: {x['content']}")
- else:
- history.append(f"Output: {x['content']}")
- context = "\n\n".join(history)
- context += "\n\nOutput: "
- return context
-
- def get_answer_at_once(self):
- context = self._get_llama_style_input()
-
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [{"text": context}]}
- )
-
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=self.max_generation_token,
- temperature=self.temperature,
- )
-
- response = output_dataset.to_dict()["instances"][0]["text"]
- return response, len(response)
-
- def get_answer_stream_iter(self):
- context = self._get_llama_style_input()
- partial_text = ""
- step = 1
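-        # pseudo-streaming: repeatedly re-run inference, extending the prompt by `step` tokens until the end string or the token limit is reached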
- for _ in range(0, self.max_generation_token, step):
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [
- {"text": context + partial_text}]}
- )
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=step,
- temperature=self.temperature,
- )
- response = output_dataset.to_dict()["instances"][0]["text"]
- if response == "" or response == self.end_string:
- break
- partial_text += response
- yield partial_text
-
-
-class XMChat(BaseLLMModel):
- def __init__(self, api_key):
- super().__init__(model_name="xmchat")
- self.api_key = api_key
- self.session_id = None
- self.reset()
- self.image_bytes = None
- self.image_path = None
- self.xm_history = []
- self.url = "https://xmbot.net/web"
- self.last_conv_id = None
-
- def reset(self):
- self.session_id = str(uuid.uuid4())
- self.last_conv_id = None
- return [], "已重置"
-
- def image_to_base64(self, image_path):
-        # open and load the image
-        img = Image.open(image_path)
-
-        # get the image width and height
-        width, height = img.size
-
-        # compute the scale ratio so that the longest side does not exceed 2048 pixels
-        max_dimension = 2048
-        scale_ratio = min(max_dimension / width, max_dimension / height)
-
-        if scale_ratio < 1:
-            # resize the image according to the scale ratio
-            new_width = int(width * scale_ratio)
-            new_height = int(height * scale_ratio)
-            img = img.resize((new_width, new_height), Image.ANTIALIAS)
-
-        # convert the image to JPEG-encoded binary data
-        buffer = BytesIO()
-        if img.mode == "RGBA":
-            img = img.convert("RGB")
-        img.save(buffer, format='JPEG')
-        binary_image = buffer.getvalue()
-
-        # Base64-encode the binary data
- base64_image = base64.b64encode(binary_image).decode('utf-8')
-
- return base64_image
-
- def try_read_image(self, filepath):
- def is_image_file(filepath):
-            # check whether the file is an image by its extension
- valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"]
- file_extension = os.path.splitext(filepath)[1].lower()
- return file_extension in valid_image_extensions
-
- if is_image_file(filepath):
- logging.info(f"读取图片文件: {filepath}")
- self.image_bytes = self.image_to_base64(filepath)
- self.image_path = filepath
- else:
- self.image_bytes = None
- self.image_path = None
-
- def like(self):
- if self.last_conv_id is None:
- return "点赞失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "good"
- }
- response = requests.post(self.url, json=data)
-        return "👍点赞成功,感谢反馈~"
-
- def dislike(self):
- if self.last_conv_id is None:
- return "点踩失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "bad"
- }
- response = requests.post(self.url, json=data)
- return "👎点踩成功,感谢反馈~"
-
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
- fake_inputs = real_inputs
- display_append = ""
- limited_context = False
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
-
- def handle_file_upload(self, files, chatbot):
- """if the model accepts multi modal input, implement this function"""
- if files:
- for file in files:
- if file.name:
- logging.info(f"尝试读取图像: {file.name}")
- self.try_read_image(file.name)
- if self.image_path is not None:
- chatbot = chatbot + [((self.image_path,), None)]
- if self.image_bytes is not None:
- logging.info("使用图片作为输入")
-                # XMChat can only handle one image per conversation, so reset the session before sending a new one
- self.reset()
- conv_id = str(uuid.uuid4())
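-                # send the base64-encoded image to XMChat as its own "imgbase64" message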
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "imgbase64",
- "data": self.image_bytes
- }
- response = requests.post(self.url, json=data)
- response = json.loads(response.text)
- logging.info(f"图片回复: {response['data']}")
- return None, chatbot, None
-
- def get_answer_at_once(self):
- question = self.history[-1]["content"]
- conv_id = str(uuid.uuid4())
- self.last_conv_id = conv_id
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "text",
- "data": question
- }
- response = requests.post(self.url, json=data)
- try:
- response = json.loads(response.text)
- return response["data"], len(response["data"])
- except Exception as e:
- return response.text, len(response.text)
-
-
-
-
-def get_model(
- model_name,
- lora_model_path=None,
- access_key=None,
- temperature=None,
- top_p=None,
- system_prompt=None,
-) -> BaseLLMModel:
- msg = i18n("模型设置为了:") + f" {model_name}"
- model_type = ModelType.get_type(model_name)
- lora_selector_visibility = False
- lora_choices = []
- dont_change_lora_selector = False
- if model_type != ModelType.OpenAI:
- config.local_embedding = True
- # del current_model.model
- model = None
- try:
- if model_type == ModelType.OpenAI:
- logging.info(f"正在加载OpenAI模型: {model_name}")
- model = OpenAIClient(
- model_name=model_name,
- api_key=access_key,
- system_prompt=system_prompt,
- temperature=temperature,
- top_p=top_p,
- )
- elif model_type == ModelType.ChatGLM:
- logging.info(f"正在加载ChatGLM模型: {model_name}")
- model = ChatGLM_Client(model_name)
- elif model_type == ModelType.LLaMA and lora_model_path == "":
- msg = f"现在请为 {model_name} 选择LoRA模型"
- logging.info(msg)
- lora_selector_visibility = True
- if os.path.isdir("lora"):
- lora_choices = get_file_names(
- "lora", plain=True, filetypes=[""])
- lora_choices = ["No LoRA"] + lora_choices
- elif model_type == ModelType.LLaMA and lora_model_path != "":
- logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}")
- dont_change_lora_selector = True
- if lora_model_path == "No LoRA":
- lora_model_path = None
- msg += " + No LoRA"
- else:
- msg += f" + {lora_model_path}"
- model = LLaMA_Client(model_name, lora_model_path)
- elif model_type == ModelType.XMChat:
- if os.environ.get("XMCHAT_API_KEY") != "":
- access_key = os.environ.get("XMCHAT_API_KEY")
- model = XMChat(api_key=access_key)
- elif model_type == ModelType.Unknown:
- raise ValueError(f"未知模型: {model_name}")
- logging.info(msg)
- except Exception as e:
- logging.error(e)
- msg = f"{STANDARD_ERROR_MSG}: {e}"
- if dont_change_lora_selector:
- return model, msg
- else:
- return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility)
-
-
-if __name__ == "__main__":
- with open("config.json", "r") as f:
- openai_api_key = cjson.load(f)["openai_api_key"]
- # set logging level to debug
- logging.basicConfig(level=logging.DEBUG)
- # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key)
- client = get_model(model_name="chatglm-6b-int4")
- chatbot = []
- stream = False
-    # test the billing feature
-    logging.info(colorama.Back.GREEN + "Testing billing info" + colorama.Back.RESET)
-    logging.info(client.billing_info())
-    # test question answering
-    logging.info(colorama.Back.GREEN + "Testing Q&A" + colorama.Back.RESET)
-    question = "Is Paris the capital of China?"
-    for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"History after Q&A test: {client.history}")
-    # test conversational memory
-    logging.info(colorama.Back.GREEN + "Testing memory" + colorama.Back.RESET)
-    question = "What question did I just ask you?"
-    for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"History after memory test: {client.history}")
-    # test the retry feature
-    logging.info(colorama.Back.GREEN + "Testing retry" + colorama.Back.RESET)
-    for i in client.retry(chatbot=chatbot, stream=stream):
-        logging.info(i)
-    logging.info(f"History after retry: {client.history}")
-    # # test the summarize feature
-    # print(colorama.Back.GREEN + "Testing summarization" + colorama.Back.RESET)
-    # chatbot, msg = client.reduce_token_size(chatbot=chatbot)
-    # print(chatbot, msg)
-    # print(f"History after summarization: {client.history}")
diff --git a/spaces/society-ethics/featured-spaces-submissions/app.py b/spaces/society-ethics/featured-spaces-submissions/app.py
deleted file mode 100644
index b37a567a9e28b6fe4a6bb2f6b22bedeab3d1ba64..0000000000000000000000000000000000000000
--- a/spaces/society-ethics/featured-spaces-submissions/app.py
+++ /dev/null
@@ -1,216 +0,0 @@
-import gradio as gr
-import os
-
-hf_writer = gr.HuggingFaceDatasetSaver(
- os.getenv('HUGGING_FACE_HUB_TOKEN'),
- organization="society-ethics",
- dataset_name="featured-spaces-submissions",
- private=True
-)
-
-principles = [
- {
- "title": "🤝 Consentful",
- "content": """
- [What is consentful tech?](https://www.consentfultech.io)
- Consentful technology supports the self-determination of people who use and are affected by these technologies.
-
- For Spaces, some examples of this can include:
-
- - Demonstrating a commitment to acquiring data from willing, informed, and appropriately compensated sources.
- - Designing systems that respect end-user autonomy, e.g. with privacy-preserving techniques.
- - Avoiding extractive, chauvinist, ["dark"](https://www.deceptive.design), and otherwise "unethical" patterns of engagement.
-
- Featured Spaces:
-
- - [lvwerra/in-the-stack-gr](https://huggingface.co/spaces/lvwerra/in-the-stack-gr)
- - [zama-fhe/encrypted_sentiment_analysis](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis)
- """
- },
- {
- "title": "🌎 Sustainable",
- "content": """
- These are Spaces that highlight and explore techniques for making machine learning ecologically sustainable.
-
- Examples could include:
-
- - Tracking emissions from training and running inferences on large language models.
- - Quantization and distillation methods to reduce carbon footprints without sacrificing model quality.
-
- Featured Space:
-
- - [pytorch/MobileNet_v2](https://huggingface.co/spaces/pytorch/MobileNet_v2)
- """
- },
- {
- "title": "👁️🗨️ Socially Conscious",
- "content": """
- "Socially Conscious" Spaces show us how machine learning can be applied as a force for *good*!
-
- This is quite broad, but some examples could be:
-
- - Using machine learning as part of an effort to tackle climate change.
- - Building tools to assist with medical research and practice.
- - Developing models for text-to-speech, image captioning, and other tasks aimed at increasing accessibility.
- - Creating systems for the digital humanities, such as for Indigenous language revitalization.
-
- Featured Space:
-
- - [vict0rsch/climateGAN](https://huggingface.co/spaces/vict0rsch/climateGAN)
- """
- },
- {
- "title": "🧑🤝🧑 Inclusive",
- "content": """
- These are projects which broaden the scope of who *builds* and *benefits* in the machine learning world.
-
- This could mean things like:
-
- - Curating diverse datasets that increase the representation of underserved groups.
- - Training language models on languages that aren't yet available on the Hugging Face Hub.
- - Creating no-code frameworks that allow non-technical folk to engage with AI.
-
- Featured Spaces:
-
- - [hackathon-pln-es/Spanish-Nahuatl-Translation](https://huggingface.co/spaces/hackathon-pln-es/Spanish-Nahuatl-Translation)
- - [hackathon-pln-es/readability-assessment-spanish](https://huggingface.co/spaces/hackathon-pln-es/readability-assessment-spanish)
- """
- },
- {
- "title": "✍️ Rigorous",
- "content": """
- Among the many concerns that go into creating new models is a seemingly simple question: "Does it work?"
-
- Rigorous projects pay special attention to examining failure cases, protecting privacy through security
- measures, and ensuring that potential users (technical and non-technical) are informed of the project's
- limitations.
-
- For example:
-
- - Projects built with models that are well-documented with [Model Cards](https://huggingface.co/docs/hub/model-cards).
- - Models that are evaluated against cutting-edge benchmarks, with results reported against disaggregated sets.
- - Demonstrations of models failing across ["gender, skin type, ethnicity, age or other attributes"](http://gendershades.org/overview.html).
- - Techniques for mitigating issues like over-fitting and training data memorization.
-
- Featured Spaces:
-
- - [emilylearning/spurious_correlation_evaluation](https://huggingface.co/spaces/emilylearning/spurious_correlation_evaluation)
- - [ml6team/post-processing-summarization](https://huggingface.co/spaces/ml6team/post-processing-summarization)
- """
- },
- {
- "title": "🤔 Inquisitive",
- "content": """
- Some projects take a radical new approach to concepts which may have become commonplace. These projects, often
- rooted in critical theory, shine a light on inequities and power structures which challenge the community to
- rethink its relationship to technology.
-
- For example:
-
- - [Reframing AI and machine learning from Indigenous perspectives](https://jods.mitpress.mit.edu/pub/lewis-arista-pechawis-kite/release/1).
- - [Highlighting LGBTQIA2S+ marginalization in AI](https://edri.org/our-work/computers-are-binary-people-are-not-how-ai-systems-undermine-lgbtq-identity/).
- - [Critiquing the harms perpetuated by AI systems](https://www.ajl.org).
-
- Featured Space:
-
- - [society-ethics/Average_diffusion_faces](https://huggingface.co/spaces/society-ethics/Average_diffusion_faces)
- """
- },
-]
-
-
-def toggle_description(title, content):
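-    # render each principle as a collapsed accordion; calling this inside the gr.Blocks context attaches it to the page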
- with gr.Accordion(label=title, open=False):
- gr.Markdown(content, elem_id="margin-top")
-
-
-def submit_entry(URL, tags, suggestions, comments):
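-    # persist the submission to the private society-ethics/featured-spaces-submissions dataset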
- hf_writer.flag(
- flag_data=[URL, tags, suggestions, comments]
- )
-
- return [
- gr.Markdown.update(
- visible=True,
- value="Thank you for your submission! 🤗"
- ),
- gr.Button.update(
- visible=False
- ),
- gr.Text.update(interactive=False),
- gr.Checkboxgroup.update(interactive=False),
- gr.Text.update(interactive=False),
- gr.TextArea.update(interactive=False),
- ]
-
-
-with gr.Blocks(css="#margin-top {margin-top: 15px} #center {justify-content: space-between;}") as demo:
- with gr.Row(elem_id="center"):
- with gr.Column(scale=4):
- gr.Markdown("## Call for submissions! 📢")
- with gr.Column():
- gr.Markdown(" ")
-
- gr.Markdown("""
- Hugging Face is collecting examples of [Spaces](https://huggingface.co/spaces) that are ethically mindful, in order to highlight and encourage these kinds of projects – and we would love your input!
-
- If you have built a Space that you think should be featured, or if you would like to nominate someone else's, paste the URL in the form below 🤗
-
-    The current set of tags reflects our initial categorization from going through Hugging Face Spaces: 🤝 consentful, 🌎 sustainable, 👁️🗨️ socially conscious, 🧑🤝🧑 inclusive, ✍️ rigorous, and 🤔 inquisitive.
-
- Let us know other relevant categories and examples that you find!
- """)
-
- with gr.Accordion(label="Want to learn more? Visit us over on the Hugging Face Discord!", open=False):
- gr.Markdown("""
- Follow these steps to join the discussion:
-
- 1. Go to [hf.co/join/discord](https://hf.co/join/discord) to join the Discord server.
- 2. Once you've registered, go to the `#role-assignment` channel.
- 3. Select the "Open Science" role.
- 4. Head over to `#ethics-and-society` to join the conversation 🥳
- """, elem_id="margin-top")
-
- with gr.Row():
- with gr.Column():
- gr.Markdown("💡 Click on the terms below to view their description and some examples.")
- with gr.Column():
- [toggle_description(x["title"], x["content"]) for x in principles]
-
- with gr.Column():
- URL = gr.Text(label="URL")
- tags = gr.Checkboxgroup(
- label="Tags - Pick as many as you like!",
- choices=[
- "Consentful",
- "Sustainable",
- "Socially Conscious",
- "Inclusive",
- "Rigorous",
- "Inquisitive",
- ]
- )
- suggestions = gr.Text(label="[Optional] Do you have suggestions for other tags?")
- comments = gr.TextArea(label="[Optional] Any extra comments?")
- submit = gr.Button(value="Submit")
- thank_you = gr.Markdown(visible=False)
-
- submit.click(
- fn=submit_entry,
- inputs=[URL, tags, suggestions, comments],
- outputs=[
- thank_you,
- submit,
- URL,
- tags,
- suggestions,
- comments,
- ]
- )
-
-hf_writer.setup(
- components=[URL, tags, suggestions, comments],
- flagging_dir="flagged"
-)
-
-demo.launch()
diff --git a/spaces/stomexserde/gpt4-ui/Examples/CRACK Table Optimizer.md b/spaces/stomexserde/gpt4-ui/Examples/CRACK Table Optimizer.md
deleted file mode 100644
index 85a887ed21248c7b09d0999aef8d485d6e7e3fd5..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/CRACK Table Optimizer.md
+++ /dev/null
@@ -1,33 +0,0 @@
-
-How to Use CRACK Table Optimizer to Improve Your Cutting Efficiency
-If you are looking for a cutting software that can help you obtain optimal cutting layouts for one (1D) and two (2D) dimensional pieces, you may want to check out CRACK Table Optimizer. This software is designed to handle complex products, such as tables, desks, cupboards, lockers, book shelves, and more. It can also be used for cutting rectangular sheets made of glass, wood, metal, plastic, or any other material used by industrial applications.
-CRACK Table Optimizer Download File → https://urlgoal.com/2uI7TQ
-In this article, we will show you how to use CRACK Table Optimizer to improve your cutting efficiency and reduce waste. We will also explain some of the key features and benefits of this software, as well as how to download and install it on your computer.
-What is CRACK Table Optimizer?
-CRACK Table Optimizer is a cutting software that uses a powerful algorithm to generate optimal cutting patterns for one (1D) and two (2D) dimensional pieces. It can handle any shape and size of material, as well as any number of pieces. It can also take into account the cutting edge thickness, the cutting direction, the number of sections, and the length of cut time.
-CRACK Table Optimizer can perform both guillotine and non-guillotine optimization. Guillotine optimization means that the material is cut from one side to the other in a single direction. This is suitable for materials like glass and wood. Non-guillotine optimization means that the cutting machine can trace the shape of the pieces. This is suitable for materials like concrete or flame retardant.
-CRACK Table Optimizer can also display the data in different units and scales. You can enter fractions or decimals, and choose from metric or imperial units. You can also view the results in graphical or tabular form, and export them to various formats, such as PDF, DXF, XML, CSV, or TXT.
-What are the benefits of using CRACK Table Optimizer?
-Using CRACK Table Optimizer can help you achieve several benefits, such as:
-
-Improving your cutting efficiency: You can save time and money by using the optimal cutting layouts generated by CRACK Table Optimizer. You can also reduce the number of cuts and movements required by your cutting machine.
-Reducing your material waste: You can minimize the amount of material that is left unused or discarded by using CRACK Table Optimizer. You can also reuse the offcuts for other projects or purposes.
-Enhancing your product quality: You can ensure that your products are cut accurately and precisely by using CRACK Table Optimizer. You can also avoid defects or errors that may occur due to improper cutting.
-Customizing your product design: You can create complex products with different shapes and sizes by using CRACK Table Optimizer. You can also adjust the parameters and settings according to your preferences and requirements.
-
-How to download and install CRACK Table Optimizer?
-If you want to try out CRACK Table Optimizer for yourself, you can download it from this link . This is a cracked version of Cutting Optimization Pro[^2^], which is the original software that CRACK Table Optimizer is based on. You can use this version for free without any limitations or restrictions.
-To install CRACK Table Optimizer on your computer, you need to follow these steps:
-
-Download the setup file from this link .
-Run the setup file and follow the instructions on the screen.
-Copy the crack file and replace it to the installation directory.
-Launch CRACK Table Optimizer and enjoy!
-
-How to use CRACK Table Optimizer?
-To use CRACK Table Optimizer to improve your cutting efficiency, you need to follow these steps:
-
-
-Select whether you want to optimize 1D or 2D pieces.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Gta San Andreas Hoodlum ((LINK)) Crack Fix.md b/spaces/stomexserde/gpt4-ui/Examples/Gta San Andreas Hoodlum ((LINK)) Crack Fix.md
deleted file mode 100644
index 2732dfff30341443346cb11b68de939f6da13f66..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Gta San Andreas Hoodlum ((LINK)) Crack Fix.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-How to Fix GTA San Andreas HOODLUM Crack Issues
-GTA San Andreas is one of the most popular and iconic games in the Grand Theft Auto series. It follows the story of Carl Johnson, a former gangster who returns to his hometown of Los Santos after his mother's murder. However, he soon finds himself in trouble with corrupt cops, rival gangs, and his own family. To play this game on PC, you need a copy of the original GTA San Andreas v1.0 executable file, which can be hard to find nowadays. That's why some players resort to using a no-CD crack by HOODLUM, which allows them to run the game without the disc.
-However, using this crack can also cause some issues, such as crashes, errors, or compatibility problems with mods. If you are facing any of these problems, don't worry. In this article, we will show you how to fix GTA San Andreas HOODLUM crack issues and enjoy the game smoothly.
-gta san andreas hoodlum crack fix Download Zip ✫ https://urlgoal.com/2uIau9
-Method 1: Update Your Game
-One of the possible reasons why you are having issues with the HOODLUM crack is that your game is outdated. The crack was made for the original version of GTA San Andreas, which was released in 2004. Since then, there have been several patches and updates that fixed bugs, improved performance, and added features to the game. Therefore, it is recommended that you update your game to the latest version before using the crack.
-To update your game, you can use the official patch from Rockstar Games[^2^], which will bring your game to version 1.01. This patch will also fix some common issues with the game, such as mouse sensitivity, audio quality, and save file corruption. However, be aware that this patch will also disable some mods and cheats that work only on version 1.0.
-Method 2: Use a Different Crack
-Another possible solution to fix GTA San Andreas HOODLUM crack issues is to use a different crack. The HOODLUM crack is not the only one available for GTA San Andreas. There are other cracks that may work better for your system or your preferences. For example, you can try the SilentPatch, which is a mod that fixes many bugs and glitches in GTA San Andreas. It also includes a no-CD patch that works with any version of the game.
-To use the SilentPatch, you need to download it from its official website and extract it to your GTA San Andreas folder. Then, run the SilentPatchSA.exe file and follow the instructions. This will patch your game and make it compatible with the SilentPatch mod. You can then run the game normally without needing a disc.
-Method 3: Run the Game as Administrator
-A simple but effective way to fix GTA San Andreas HOODLUM crack issues is to run the game as administrator. Sometimes, the game may not have enough permissions to access certain files or folders on your system, which can cause crashes or errors. Running the game as administrator will grant it full access and prevent these problems.
-To run the game as administrator, you need to right-click on the GTA_SA.exe file in your GTA San Andreas folder and select Properties. Then, go to the Compatibility tab and check the box that says Run this program as an administrator. Click Apply and OK to save the changes. You can then run the game normally and see if it works better.
-Conclusion
-GTA San Andreas is a classic game that many players still enjoy today. However, if you want to play it on PC without a disc, you may encounter some issues with the HOODLUM crack. Fortunately, there are some ways to fix these issues and make the game run smoothly.
-
-In this article, we showed you how to fix GTA San Andreas HOODLUM crack issues by updating your game, using a different crack, or running the game as administrator. We hope these methods helped you solve your problems and enjoy GTA San Andreas without any hassle.
-
-
\ No newline at end of file
diff --git a/spaces/stratussox/yolov5_inference/one_image_detection.py b/spaces/stratussox/yolov5_inference/one_image_detection.py
deleted file mode 100644
index c594fcd3ca17c8f2dfc9642e453d885a3047e33b..0000000000000000000000000000000000000000
--- a/spaces/stratussox/yolov5_inference/one_image_detection.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Run YOLOv5 detection inference on images, videos, directories, globs, YouTube, webcam, streams, etc.
-
-Usage - sources:
- $ python detect.py --weights yolov5s.pt --source 0 # webcam
- img.jpg # image
- vid.mp4 # video
- path/ # directory
- 'path/*.jpg' # glob
- 'https://youtu.be/Zgi9g1ksQHc' # YouTube
- 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
-
-Usage - formats:
- $ python detect.py --weights yolov5s.pt # PyTorch
- yolov5s.torchscript # TorchScript
- yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
- yolov5s_openvino_model # OpenVINO
- yolov5s.engine # TensorRT
- yolov5s.mlmodel # CoreML (macOS-only)
- yolov5s_saved_model # TensorFlow SavedModel
- yolov5s.pb # TensorFlow GraphDef
- yolov5s.tflite # TensorFlow Lite
- yolov5s_edgetpu.tflite # TensorFlow Edge TPU
- yolov5s_paddle_model # PaddlePaddle
-"""
-
-import argparse
-import os
-import platform
-import sys
-from pathlib import Path
-
-import torch
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[0] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
-
-from models.common import DetectMultiBackend
-from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams, LoadImages2
-from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2,
- increment_path, non_max_suppression, print_args, scale_boxes, strip_optimizer, xyxy2xywh)
-from utils.plots import Annotator, colors, save_one_box
-from utils.torch_utils import select_device, smart_inference_mode
-
-
-@smart_inference_mode()
-def run(
- img_list,
- weights=ROOT / 'yolov5s.pt', # model path or triton URL
- source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam)
- data=ROOT / 'data/coco128.yaml', # dataset.yaml path
- imgsz=(640, 640), # inference size (height, width)
- conf_thres=0.25, # confidence threshold
- iou_thres=0.45, # NMS IOU threshold
- max_det=1000, # maximum detections per image
- device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
- view_img=False, # show results
- save_txt=False, # save results to *.txt
- save_conf=False, # save confidences in --save-txt labels
- save_crop=False, # save cropped prediction boxes
- nosave=False, # do not save images/videos
- classes=None, # filter by class: --class 0, or --class 0 2 3
- agnostic_nms=False, # class-agnostic NMS
- augment=False, # augmented inference
- visualize=False, # visualize features
- update=False, # update all models
- project=ROOT / 'runs/detect', # save results to project/name
- name='exp', # save results to project/name
- exist_ok=False, # existing project/name ok, do not increment
- line_thickness=3, # bounding box thickness (pixels)
- hide_labels=False, # hide labels
- hide_conf=False, # hide confidences
- half=False, # use FP16 half-precision inference
- dnn=False, # use OpenCV DNN for ONNX inference
- vid_stride=1, # video frame-rate stride
-):
- source = str(source)
- save_img = not nosave and not source.endswith('.txt') # save inference images
- is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
- is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
- webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
- screenshot = source.lower().startswith('screen')
- if is_url and is_file:
- source = check_file(source) # download
-
- # Directories
- save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
- (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
-
- # Load model
- device = select_device(device)
- model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
- stride, names, pt = model.stride, model.names, model.pt
- imgsz = check_img_size(imgsz, s=stride) # check image size
-
-
- #Read Images from memory
- # img_list = []
-
- # img1 = cv2.imread(r'data\images\bus.jpg')
- # img_list.append(img1)
- # img2 = cv2.imread(r'data\images\calle.png')
- # img_list.append(img2)
- # img3 = cv2.imread(r'data\images\zidane.jpg')
- # img_list.append(img3)
-
-    print("Number of input images:")
-    print(len(img_list))
-
- # Dataloader
- bs = 1 # batch_size
- if webcam:
- view_img = check_imshow(warn=True)
- dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
- bs = len(dataset)
- elif screenshot:
- dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt)
- else:
- #dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
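-        # LoadImages2 appears to be a modified dataloader that iterates over in-memory images rather than file paths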
- dataset = LoadImages2(img_list=img_list, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
-
-
- vid_path, vid_writer = [None] * bs, [None] * bs
-
- # Run inference
- model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup
- seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
- count = 0
-
- output = []
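-    # annotated result images are collected in `output` and also written under save_dir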
- for im, im0s, s in dataset:
-
- save_file_number = str(count)
- save_file_name = "im" + str(count) + ".jpg"
- count = count + 1
-
- with dt[0]:
- im = torch.from_numpy(im).to(model.device)
- im = im.half() if model.fp16 else im.float() # uint8 to fp16/32
- im /= 255 # 0 - 255 to 0.0 - 1.0
- if len(im.shape) == 3:
- im = im[None] # expand for batch dim
-
- # Inference
- with dt[1]:
- pred = model(im, augment=augment, visualize=visualize)
-
- # NMS
- with dt[2]:
- pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
-
- # Second-stage classifier (optional)
- # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
-
- # Process predictions
- for i, det in enumerate(pred): # per image
- seen += 1
-
- im0 = im0s.copy()
-
-
-
- save_path = str(save_dir / save_file_name) # im.jpg
- txt_path = str(save_dir / 'labels' / save_file_number) + '' # im.txt
- s += '%gx%g ' % im.shape[2:] # print string
- gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
- imc = im0.copy() if save_crop else im0 # for save_crop
- annotator = Annotator(im0, line_width=line_thickness, example=str(names))
- if len(det):
- # Rescale boxes from img_size to im0 size
- det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round()
-
- # Print results
- for c in det[:, 5].unique():
- n = (det[:, 5] == c).sum() # detections per class
- s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
-
- # Write results
- for *xyxy, conf, cls in reversed(det):
- if save_txt: # Write to file
- xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
- line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
- with open(f'{txt_path}.txt', 'a') as f:
- f.write(('%g ' * len(line)).rstrip() % line + '\n')
-
- if save_img or save_crop or view_img: # Add bbox to image
- c = int(cls) # integer class
- label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
- annotator.box_label(xyxy, label, color=colors(c, True))
-
- if save_crop:
- save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / save_file_name, BGR=True)
-
- # Stream results
- im0 = annotator.result()
- # if view_img:
- # if platform.system() == 'Linux' and p not in windows:
- # windows.append(p)
- # cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux)
- # cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
- # cv2.imshow(str(p), im0)
- # cv2.waitKey(1) # 1 millisecond
-
- # Save results (image with detections)
- save_img = True
- if save_img:
- if dataset.mode == 'image':
- cv2.imwrite(save_path, im0)
- output.append(im0)
-
- # Print time (inference-only)
- LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
-
- # Print results
- t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image
- LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
- if save_txt or save_img:
- s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
- LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
- if update:
- strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning)
-
- return output
-
-
-def parse_opt():
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path or triton URL')
- parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)')
- parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
- parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
- parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
- parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--view-img', action='store_true', help='show results')
- parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
- parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
- parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
- parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
- parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
- parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
- parser.add_argument('--augment', action='store_true', help='augmented inference')
- parser.add_argument('--visualize', action='store_true', help='visualize features')
- parser.add_argument('--update', action='store_true', help='update all models')
- parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name')
- parser.add_argument('--name', default='exp', help='save results to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
- parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
- parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
- parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
- parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
- parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride')
- opt = parser.parse_args()
- opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
- print_args(vars(opt))
- return opt
-
-
-def main(opt, img_list):
- check_requirements(exclude=('tensorboard', 'thop'))
- output = run(img_list,**vars(opt))
- return output
-
-def detect(img_list):
- opt = parse_opt()
- output = main(opt,img_list)
- return output
diff --git a/spaces/sub314xxl/MetaGPT/metagpt/roles/software_company.py b/spaces/sub314xxl/MetaGPT/metagpt/roles/software_company.py
deleted file mode 100644
index 9b570f12dda35e127fddbf4e26c6dbc6fa64b491..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MetaGPT/metagpt/roles/software_company.py
+++ /dev/null
@@ -1,255 +0,0 @@
-import os
-from typing import Any, Coroutine
-
-import aiofiles
-from aiobotocore.session import get_session
-from mdutils.mdutils import MdUtils
-from zipstream import AioZipStream
-
-from metagpt.actions import Action
-from metagpt.actions.design_api import WriteDesign
-from metagpt.actions.project_management import WriteTasks
-from metagpt.actions.write_code import WriteCode
-from metagpt.actions.write_prd import WritePRD
-from metagpt.config import CONFIG
-from metagpt.roles import Architect, Engineer, ProductManager, ProjectManager, Role
-from metagpt.schema import Message
-from metagpt.software_company import SoftwareCompany as _SoftwareCompany
-
-
-class RoleRun(Action):
- def __init__(self, role: Role, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.role = role
- action = role._actions[0]
- self.desc = f"{role.profile} {action.desc or str(action)}"
-
-
-class SoftwareCompany(Role):
-    """Wraps the software company as a Role for quick integration with the agent store."""
-
- def __init__(self, name="", profile="", goal="", constraints="", desc="", *args, **kwargs):
- super().__init__(name, profile, goal, constraints, desc, *args, **kwargs)
- company = _SoftwareCompany()
- company.hire([ProductManager(), Architect(), ProjectManager(), Engineer(n_borg=5)])
- self.company = company
- self.uid = CONFIG.workspace.name
-
- def recv(self, message: Message) -> None:
- self.company.start_project(message.content)
-
- async def _think(self) -> Coroutine[Any, Any, bool]:
-        """Running the software company takes four rounds:
-
- BOSS -> ProductManager -> Architect -> ProjectManager -> Engineer
- BossRequirement -> WritePRD -> WriteDesign -> WriteTasks -> WriteCode
- """
- environment = self.company.environment
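-        # if any role has observed a message it has not yet stored in memory, another round of work is pending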
- for role in environment.roles.values():
- observed = environment.memory.get_by_actions(role._rc.watch)
- memory = role._rc.memory.get()
- for i in observed:
- if i not in memory:
- self._rc.todo = RoleRun(role)
- return True
- self._rc.todo = None
- return False
-
- async def _act(self) -> Message:
- await self.company.run(1)
- output = self.company.environment.memory.get(1)[0]
- cause_by = output.cause_by
-
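-        # render the raw action output as Markdown according to which action produced it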
- if cause_by is WritePRD:
- output = await self.format_prd(output)
- elif cause_by is WriteDesign:
- output = await self.format_system_design(output)
- elif cause_by is WriteTasks:
- output = await self.format_task(output)
- elif cause_by is WriteCode:
- output = await self.format_code(output)
- return output
-
- async def format_prd(self, prd: Message):
- workspace = CONFIG.workspace
- data = prd.instruct_content.dict()
- mdfile = MdUtils(None)
- title = "Original Requirements"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_paragraph(data[title])
-
- title = "Product Goals"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_list(data[title], marked_with="1")
-
- title = "User Stories"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_list(data[title], marked_with="1")
-
- title = "Competitive Analysis"
- mdfile.new_header(2, title, add_table_of_contents=False)
- if all(i.count(":") == 1 for i in data[title]):
- mdfile.new_table(
- 2, len(data[title]) + 1, ["Competitor", "Description", *(i for j in data[title] for i in j.split(":"))]
- )
- else:
- mdfile.new_list(data[title], marked_with="1")
-
- title = "Competitive Quadrant Chart"
- mdfile.new_header(2, title, add_table_of_contents=False)
- competitive_analysis_path = workspace / "resources" / "competitive_analysis.png"
- if competitive_analysis_path.exists():
- key = f"{self.uid}/resources/competitive_analysis.png"
- url = await self.upload_file_to_s3(competitive_analysis_path, key)
- mdfile.new_line(mdfile.new_inline_image(title, url))
- else:
- mdfile.insert_code(data[title], "mermaid")
-
- title = "Requirement Analysis"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_paragraph(data[title])
-
- title = "Requirement Pool"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_table(
- 2, len(data[title]) + 1, ["Task Description", "Priority", *(i for j in data[title] for i in j)]
- )
-
- title = "UI Design draft"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_paragraph(data[title])
-
- title = "Anything UNCLEAR"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_paragraph(data[title])
- return Message(mdfile.get_md_text(), cause_by=prd.cause_by, role=prd.role)
-
- async def format_system_design(self, design: Message):
- workspace = CONFIG.workspace
- data = design.instruct_content.dict()
- mdfile = MdUtils(None)
-
- title = "Implementation approach"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_paragraph(data[title])
-
- title = "Python package name"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.insert_code(data[title], "python")
-
- title = "File list"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_list(data[title], marked_with="1")
-
- title = "Data structures and interface definitions"
- mdfile.new_header(2, title, add_table_of_contents=False)
- data_api_design_path = workspace / "resources" / "data_api_design.png"
- if data_api_design_path.exists():
- key = f"{self.uid}/resources/data_api_design.png"
- url = await self.upload_file_to_s3(data_api_design_path, key)
- mdfile.new_line(mdfile.new_inline_image(title, url))
- else:
- mdfile.insert_code(data[title], "mermaid")
-
- title = "Program call flow"
- mdfile.new_header(2, title, add_table_of_contents=False)
- seq_flow_path = workspace / "resources" / "seq_flow.png"
- if seq_flow_path.exists():
- key = f"{self.uid}/resources/seq_flow.png"
- url = await self.upload_file_to_s3(seq_flow_path, key)
- mdfile.new_line(mdfile.new_inline_image(title, url))
- else:
- mdfile.insert_code(data[title], "mermaid")
-
- title = "Anything UNCLEAR"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_paragraph(data[title])
- return Message(mdfile.get_md_text(), cause_by=design.cause_by, role=design.role)
-
- async def format_task(self, task: Message):
- data = task.instruct_content.dict()
- mdfile = MdUtils(None)
- title = "Required Python third-party packages"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.insert_code(data[title], "python")
-
- title = "Required Other language third-party packages"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.insert_code(data[title], "python")
-
- title = "Full API spec"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.insert_code(data[title], "python")
-
- title = "Logic Analysis"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_table(
- 2, len(data[title]) + 1, ["Filename", "Class/Function Name", *(i for j in data[title] for i in j)]
- )
-
- title = "Task list"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.new_list(data[title])
-
- title = "Shared Knowledge"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.insert_code(data[title], "python")
-
- title = "Anything UNCLEAR"
- mdfile.new_header(2, title, add_table_of_contents=False)
- mdfile.insert_code(data[title], "python")
- return Message(mdfile.get_md_text(), cause_by=task.cause_by, role=task.role)
-
- async def format_code(self, code: Message):
- mdfile = MdUtils(None)
-
-        for name, content in code.instruct_content.items():
-            mdfile.new_header(2, name, add_table_of_contents=False)
-            suffix = name.rsplit(".", maxsplit=1)[-1]
-            mdfile.insert_code(content, "python" if suffix == "py" else suffix)
-
- url = await self.upload()
- mdfile.new_header(2, "Project Packaging Complete", add_table_of_contents=False)
-
- mdfile.new_paragraph(
- "We are thrilled to inform you that our project has been successfully packaged "
- "and is ready for download and use. You can download the packaged project through"
- f" the following link:\n[Project Download Link]({url})"
- )
- return Message(mdfile.get_md_text(), cause_by=code.cause_by, role=code.role)
-
- async def upload_file_to_s3(self, filepath: str, key: str):
- async with aiofiles.open(filepath, "rb") as f:
- content = await f.read()
- return await self.upload_to_s3(content, key)
-
- async def upload_to_s3(self, content: bytes, key: str):
- session = get_session()
- async with session.create_client(
- "s3",
- aws_secret_access_key=os.getenv("S3_SECRET_KEY"),
- aws_access_key_id=os.getenv("S3_ACCESS_KEY"),
- endpoint_url=os.getenv("S3_ENDPOINT_URL"),
- use_ssl=os.getenv("S3_SECURE"),
- ) as client:
- # upload object to amazon s3
- bucket = os.getenv("S3_BUCKET")
- await client.put_object(Bucket=bucket, Key=key, Body=content)
- return f"{os.getenv('S3_ENDPOINT_URL')}/{bucket}/{key}"
-
- async def upload(self):
- engineer: Engineer = self.company.environment.roles["Engineer"]
- name = engineer.get_workspace().name
- files = []
- workspace = CONFIG.workspace
- workspace = str(workspace)
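-        # collect every file under the workspace, keeping paths relative to the workspace root for the archive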
- for r, _, fs in os.walk(workspace):
- _r = r[len(workspace):].lstrip("/")
- for f in fs:
- files.append({"file": os.path.join(r, f), "name": os.path.join(_r, f)})
- # aiozipstream
- chunks = []
- async for chunk in AioZipStream(files, chunksize=32768).stream():
- chunks.append(chunk)
- key = f"{self.uid}/metagpt-{name}.zip"
- return await self.upload_to_s3(b"".join(chunks), key)
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/CS GO Patch V5 NOSTEAM.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/CS GO Patch V5 NOSTEAM.md
deleted file mode 100644
index 81f2f34d59b6c0bb65aa77924a3847cb2b0fbfd8..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/CS GO Patch V5 NOSTEAM.md
+++ /dev/null
@@ -1,8 +0,0 @@
-CS GO Patch V5 NOSTEAM Download 🌟 https://cinurl.com/2uEXQm
-
-CS:GO received a free version! As long as you have Steam, you can now download it to play offline against... ❤
-[Updated]
-[UPDATED]
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Grade 10 Filipino Module 2nd Quarter Pdf Download WORK.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Grade 10 Filipino Module 2nd Quarter Pdf Download WORK.md
deleted file mode 100644
index d194a89cb48d3123c1af19a4ad620fa7e61e8648..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Grade 10 Filipino Module 2nd Quarter Pdf Download WORK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Grade 10 Filipino Module 2nd Quarter Pdf Download Download ✑ https://cinurl.com/2uEZ0X
-
-Quarter 1 & 2 · Quarter 3 ... Edukasyon sa Pagpapakatao. Quarter 1 - 4 TG · Quarter 1 - 4 LM. Filipino. Linggo 1 - 19 ... Teaching Guide 4 Teaching Guide 10 · Teaching ... Grade 9. Araling Panlipunan. Teaching Guide · Learner's Module Q1.
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gv License Manager Error Code 15 0 0 FREE.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gv License Manager Error Code 15 0 0 FREE.md
deleted file mode 100644
index a461b9b476f1956dfa6125fcc7353abfa828b07b..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gv License Manager Error Code 15 0 0 FREE.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-The error code 15 0 0 is raised when a network error is received by the server. The same error codes may also be obtained when a firewalled client attempts to connect to a port that is not permitted. You can use the sqlnet.ora file to configure out-of-band breaks for those messages. Out-of-band breaks can also be configured using the OUT_OF_BAND parameter in TNSNAMES, for example (the suggested default is TRUE):
-When Sql*Plus exits unexpectedly (for example, due to an unexpected signal), Oracle Client will not raise error codes, but instead will return the error code exited by SqlPlus. To obtain details on the problem, connect to the DBMS using SQL*Plus. Typically, the SQL*Plus error code appears in the output section of Sql*Plus. This will usually be terminated by the SqlPlus exit message. Other situations which might not cause Sql*Plus to exit, but might generate a problem are invalid commands or an invalid TNS connect string. These might cause a non-zero exit code, which will be reported.
-Gv license manager error code 15 0 0 Download Zip ⚙ https://cinurl.com/2uEY9g
-Another situation that might cause Sql*Plus to unexpectedly exit is an unexpected response from the DBMS. At this point, the DBMS is not handling a request for your session, and Sql*Plus is exiting. When this happens, Sql*Plus does not exit normally; instead it exits with error code 4294967295 and a brief message. An example might be the following:
-When multiple databases are configured with the same connection string, you can specify which one should be used for the database connection. This configuration is relevant when the two databases use the same default database. The configuration option takes a list of databases defined in the databases entry; only the databases defined there will be used. A shared password for any of the defined databases can also be set through this option.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/emanet_r50-d8.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/emanet_r50-d8.py
deleted file mode 100644
index 26adcd430926de0862204a71d345f2543167f27b..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/emanet_r50-d8.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='EMAHead',
- in_channels=2048,
- in_index=3,
- channels=256,
- ema_channels=512,
- num_bases=64,
- num_stages=3,
- momentum=0.1,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
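
The config above only declares the EMANet model; on its own it does not run anything. Below is a minimal sketch of how such a `_base_` file is typically consumed; the file path, the mmcv 1.x `Config` API, and the commented-out mmsegmentation builder call are assumptions, not something this repository guarantees.

```python
# Hypothetical usage sketch: load the _base_ model config with mmcv 1.x and inspect a few fields.
from mmcv import Config

cfg = Config.fromfile('configs/_base_/models/emanet_r50-d8.py')  # assumed path
print(cfg.model['type'])                        # 'EncoderDecoder'
print(cfg.model['decode_head']['type'])         # 'EMAHead'
print(cfg.model['decode_head']['num_classes'])  # 19 (Cityscapes-style label set)

# In mmsegmentation the dict would normally be handed to a builder, e.g.:
# from mmseg.models import build_segmentor
# model = build_segmentor(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
```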
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/three_interpolate.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/three_interpolate.py
deleted file mode 100644
index 203f47f05d58087e034fb3cd8cd6a09233947b4a..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/three_interpolate.py
+++ /dev/null
@@ -1,68 +0,0 @@
-from typing import Tuple
-
-import torch
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['three_interpolate_forward', 'three_interpolate_backward'])
-
-
-class ThreeInterpolate(Function):
- """Performs weighted linear interpolation on 3 features.
-
-    Please refer to `Paper of PointNet++ <https://arxiv.org/abs/1706.02413>`_
- for more details.
- """
-
- @staticmethod
- def forward(ctx, features: torch.Tensor, indices: torch.Tensor,
- weight: torch.Tensor) -> torch.Tensor:
- """
- Args:
- features (Tensor): (B, C, M) Features descriptors to be
- interpolated
- indices (Tensor): (B, n, 3) index three nearest neighbors
- of the target features in features
- weight (Tensor): (B, n, 3) weights of interpolation
-
- Returns:
- Tensor: (B, C, N) tensor of the interpolated features
- """
- assert features.is_contiguous()
- assert indices.is_contiguous()
- assert weight.is_contiguous()
-
- B, c, m = features.size()
- n = indices.size(1)
- ctx.three_interpolate_for_backward = (indices, weight, m)
- output = torch.cuda.FloatTensor(B, c, n)
-
- ext_module.three_interpolate_forward(
- features, indices, weight, output, b=B, c=c, m=m, n=n)
- return output
-
- @staticmethod
- def backward(
- ctx, grad_out: torch.Tensor
- ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
- """
- Args:
- grad_out (Tensor): (B, C, N) tensor with gradients of outputs
-
- Returns:
- Tensor: (B, C, M) tensor with gradients of features
- """
- idx, weight, m = ctx.three_interpolate_for_backward
- B, c, n = grad_out.size()
-
- grad_features = torch.cuda.FloatTensor(B, c, m).zero_()
- grad_out_data = grad_out.data.contiguous()
-
- ext_module.three_interpolate_backward(
- grad_out_data, idx, weight, grad_features.data, b=B, c=c, n=n, m=m)
- return grad_features, None, None
-
-
-three_interpolate = ThreeInterpolate.apply
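
A short usage sketch for the op defined above. It assumes a CUDA device and a CUDA-enabled mmcv build (the forward pass allocates `torch.cuda.FloatTensor`), and it imports from the upstream `mmcv.ops` package rather than this vendored copy. Shapes follow the docstring: features `(B, C, M)`, indices `(B, N, 3)`, weights `(B, N, 3)`, output `(B, C, N)`.

```python
# Hypothetical usage sketch for three-nearest-neighbour interpolation (requires CUDA).
import torch
from mmcv.ops import three_interpolate  # upstream location of the same op

B, C, M, N = 2, 16, 64, 128
features = torch.randn(B, C, M, device='cuda')                              # source feature descriptors
indices = torch.randint(0, M, (B, N, 3), device='cuda', dtype=torch.int32)  # 3 neighbour ids per target point
weight = torch.rand(B, N, 3, device='cuda')
weight = weight / weight.sum(dim=2, keepdim=True)                           # interpolation weights sum to 1

out = three_interpolate(features, indices, weight)
print(out.shape)  # torch.Size([2, 16, 128])
```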
diff --git a/spaces/syam417/rvc/infer_pack/modules.py b/spaces/syam417/rvc/infer_pack/modules.py
deleted file mode 100644
index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000
--- a/spaces/syam417/rvc/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
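
A small smoke test can make the flow-layer convention above concrete: every layer returns `(output, logdet)` in the forward direction and inverts itself when called with `reverse=True`. The sketch below uses `ElementwiseAffine` because it has no external dependencies; the import path assumes this repository's `infer_pack` package is importable, and the `(batch, channels, time)` shapes are an assumption.

```python
# Hypothetical round-trip check for the invertible flow layers defined above.
import torch
from infer_pack.modules import ElementwiseAffine  # assumes the repo layout above

batch, channels, t = 2, 4, 50
layer = ElementwiseAffine(channels)
x = torch.randn(batch, channels, t)
x_mask = torch.ones(batch, 1, t)          # all frames valid

y, logdet = layer(x, x_mask)              # forward: y = m + exp(logs) * x, plus log|det J|
x_rec = layer(y, x_mask, reverse=True)    # reverse: x = (y - m) * exp(-logs)
print(torch.allclose(x, x_rec, atol=1e-6))  # True
print(logdet.shape)                         # torch.Size([2])
```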
diff --git a/spaces/tammm/vits-models/app.py b/spaces/tammm/vits-models/app.py
deleted file mode 100644
index e41932ae3e0a20837c5740859b4be34253c59b82..0000000000000000000000000000000000000000
--- a/spaces/tammm/vits-models/app.py
+++ /dev/null
@@ -1,264 +0,0 @@
-# coding=utf-8
-import os
-import re
-import argparse
-import utils
-import commons
-import json
-import torch
-import gradio as gr
-from models import SynthesizerTrn
-from text import text_to_sequence, _clean_text
-from torch import no_grad, LongTensor
-import gradio.processing_utils as gr_processing_utils
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces
-
-hps_ms = utils.get_hparams_from_file(r'config/config.json')
-
-audio_postprocess_ori = gr.Audio.postprocess
-
-def audio_postprocess(self, y):
- data = audio_postprocess_ori(self, y)
- if data is None:
- return None
- return gr_processing_utils.encode_url_or_file_to_base64(data["name"])
-
-
-gr.Audio.postprocess = audio_postprocess
-
-def get_text(text, hps, is_symbol):
- text_norm, clean_text = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm, clean_text
-
-def create_tts_fn(net_g_ms, speaker_id):
- def tts_fn(text, language, noise_scale, noise_scale_w, length_scale, is_symbol):
- text = text.replace('\n', ' ').replace('\r', '').replace(" ", "")
- if limitation:
- text_len = len(re.sub("\[([A-Z]{2})\]", "", text))
- max_len = 100
- if is_symbol:
- max_len *= 3
- if text_len > max_len:
- return "Error: Text is too long", None
- if not is_symbol:
- if language == 0:
- text = f"[ZH]{text}[ZH]"
- elif language == 1:
- text = f"[JA]{text}[JA]"
- else:
- text = f"{text}"
- stn_tst, clean_text = get_text(text, hps_ms, is_symbol)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0).to(device)
- x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device)
- sid = LongTensor([speaker_id]).to(device)
- audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w,
- length_scale=length_scale)[0][0, 0].data.cpu().float().numpy()
-
- return "Success", (22050, audio)
- return tts_fn
-
-def create_to_symbol_fn(hps):
- def to_symbol_fn(is_symbol_input, input_text, temp_lang):
- if temp_lang == 0:
- clean_text = f'[ZH]{input_text}[ZH]'
- elif temp_lang == 1:
- clean_text = f'[JA]{input_text}[JA]'
- else:
- clean_text = input_text
- return _clean_text(clean_text, hps.data.text_cleaners) if is_symbol_input else ''
-
- return to_symbol_fn
-def change_lang(language):
- if language == 0:
- return 0.6, 0.668, 1.2
- elif language == 1:
- return 0.6, 0.668, 1
- else:
- return 0.6, 0.668, 1
-
-download_audio_js = """
-() =>{{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let audio = root.querySelector("#tts-audio-{audio_id}").querySelector("audio");
- let text = root.querySelector("#input-text-{audio_id}").querySelector("textarea");
- if (audio == undefined)
- return;
- text = text.value;
- if (text == undefined)
- text = Math.floor(Math.random()*100000000);
- audio = audio.src;
- let oA = document.createElement("a");
- oA.download = text.substr(0, 20)+'.wav';
- oA.href = audio;
- document.body.appendChild(oA);
- oA.click();
- oA.remove();
-}}
-"""
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--device', type=str, default='cpu')
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- args = parser.parse_args()
- device = torch.device(args.device)
-
- models = []
- with open("pretrained_models/info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for i, info in models_info.items():
- if not info['enable']:
- continue
- sid = info['sid']
- name_en = info['name_en']
- name_zh = info['name_zh']
- title = info['title']
- cover = f"pretrained_models/{i}/{info['cover']}"
- example = info['example']
- language = info['language']
- net_g_ms = SynthesizerTrn(
- len(hps_ms.symbols),
- hps_ms.data.filter_length // 2 + 1,
- hps_ms.train.segment_size // hps_ms.data.hop_length,
- n_speakers=hps_ms.data.n_speakers if info['type'] == "multi" else 0,
- **hps_ms.model)
- utils.load_checkpoint(f'pretrained_models/{i}/{i}.pth', net_g_ms, None)
- _ = net_g_ms.eval().to(device)
- models.append((sid, name_en, name_zh, title, cover, example, language, net_g_ms, create_tts_fn(net_g_ms, sid), create_to_symbol_fn(hps_ms)))
- with gr.Blocks() as app:
- gr.Markdown(
- "# vits-models\n"
- "## Please do not generate content that could infringe upon the rights or cause harm to individuals or organizations.\n"
- "## ·请不要生成会对个人以及组织造成侵害的内容\n"
- "\n\n"
- "[Open In Colab]"
- "(https://colab.research.google.com/drive/10QOk9NPgoKZUXkIhhuVaZ7SYra1MPMKH?usp=share_link)"
- " without queue and length limitation.(无需等待队列,并且没有长度限制)\n\n"
- "[Finetune your own model](https://github.com/SayaSS/vits-finetuning)"
- )
-
- with gr.Tabs():
- with gr.TabItem("EN"):
- for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models:
- with gr.TabItem(name_en):
- with gr.Row():
- gr.Markdown(
-                                '<div align="center">'
-                                f'<strong>{title}</strong> '
-                                f'<img src="file/{cover}">' if cover else ""
-                                '</div>'
- )
- with gr.Row():
- with gr.Column():
- input_text = gr.Textbox(label="Text (100 words limitation)" if limitation else "Text", lines=5, value=example, elem_id=f"input-text-en-{name_en.replace(' ','')}")
- lang = gr.Dropdown(label="Language", choices=["Chinese", "Japanese", "Mix(wrap the Chinese text with [ZH][ZH], wrap the Japanese text with [JA][JA])"],
- type="index", value=language)
- with gr.Accordion(label="Advanced Options", open=False):
- symbol_input = gr.Checkbox(value=False, label="Symbol input")
- symbol_list = gr.Dataset(label="Symbol list", components=[input_text],
- samples=[[x] for x in hps_ms.symbols])
- symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False)
- btn = gr.Button(value="Generate", variant="primary")
- with gr.Row():
- ns = gr.Slider(label="noise_scale", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True)
- nsw = gr.Slider(label="noise_scale_w", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True)
- ls = gr.Slider(label="length_scale", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True)
- with gr.Column():
- o1 = gr.Textbox(label="Output Message")
- o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio-en-{name_en.replace(' ','')}")
- download = gr.Button("Download Audio")
- btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2], api_name=f"tts-{name_en}")
- download.click(None, [], [], _js=download_audio_js.format(audio_id=f"en-{name_en.replace(' ', '')}"))
- lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls])
- symbol_input.change(
- to_symbol_fn,
- [symbol_input, input_text, lang],
- [input_text]
- )
- symbol_list.click(None, [symbol_list, symbol_list_json], [input_text],
- _js=f"""
- (i,symbols) => {{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let text_input = root.querySelector("#input-text-en-{name_en.replace(' ', '')}").querySelector("textarea");
- let startPos = text_input.selectionStart;
- let endPos = text_input.selectionEnd;
- let oldTxt = text_input.value;
- let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos);
- text_input.value = result;
- let x = window.scrollX, y = window.scrollY;
- text_input.focus();
- text_input.selectionStart = startPos + symbols[i].length;
- text_input.selectionEnd = startPos + symbols[i].length;
- text_input.blur();
- window.scrollTo(x, y);
- return text_input.value;
- }}""")
- with gr.TabItem("中文"):
- for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models:
- with gr.TabItem(name_zh):
- with gr.Row():
- gr.Markdown(
-                                    '<div align="center">'
-                                    f'<strong>{title}</strong> '
-                                    f'<img src="file/{cover}">' if cover else ""
-                                    '</div>'
- )
- with gr.Row():
- with gr.Column():
- input_text = gr.Textbox(label="文本 (100字上限)" if limitation else "文本", lines=5, value=example, elem_id=f"input-text-zh-{name_zh}")
- lang = gr.Dropdown(label="语言", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"],
- type="index", value="中文"if language == "Chinese" else "日语")
- with gr.Accordion(label="高级选项", open=False):
- symbol_input = gr.Checkbox(value=False, label="符号输入")
- symbol_list = gr.Dataset(label="符号列表", components=[input_text],
- samples=[[x] for x in hps_ms.symbols])
- symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False)
- btn = gr.Button(value="生成", variant="primary")
- with gr.Row():
- ns = gr.Slider(label="控制感情变化程度", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True)
- nsw = gr.Slider(label="控制音素发音长度", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True)
- ls = gr.Slider(label="控制整体语速", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True)
- with gr.Column():
- o1 = gr.Textbox(label="输出信息")
- o2 = gr.Audio(label="输出音频", elem_id=f"tts-audio-zh-{name_zh}")
- download = gr.Button("下载音频")
- btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2])
- download.click(None, [], [], _js=download_audio_js.format(audio_id=f"zh-{name_zh}"))
- lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls])
- symbol_input.change(
- to_symbol_fn,
- [symbol_input, input_text, lang],
- [input_text]
- )
- symbol_list.click(None, [symbol_list, symbol_list_json], [input_text],
- _js=f"""
- (i,symbols) => {{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let text_input = root.querySelector("#input-text-zh-{name_zh}").querySelector("textarea");
- let startPos = text_input.selectionStart;
- let endPos = text_input.selectionEnd;
- let oldTxt = text_input.value;
- let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos);
- text_input.value = result;
- let x = window.scrollX, y = window.scrollY;
- text_input.focus();
- text_input.selectionStart = startPos + symbols[i].length;
- text_input.selectionEnd = startPos + symbols[i].length;
- text_input.blur();
- window.scrollTo(x, y);
- return text_input.value;
- }}""")
- app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share)
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Corel Draw X8 Free Download Full Version With Crack !FREE!.md b/spaces/terfces0erbo/CollegeProjectV2/Corel Draw X8 Free Download Full Version With Crack !FREE!.md
deleted file mode 100644
index bf05f8deac6d0b2f75ed56e464403e7b12e95542..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Corel Draw X8 Free Download Full Version With Crack !FREE!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Corel draw x8 free download full version with crack Download File ⇔ https://bytlly.com/2uGlbv
-
- 1fdad05405
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Eleventa Multicaja Full Crack 427.md b/spaces/terfces0erbo/CollegeProjectV2/Eleventa Multicaja Full Crack 427.md
deleted file mode 100644
index 2a81789047b9cde08bab72491bdfcaa5944a82f9..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Eleventa Multicaja Full Crack 427.md
+++ /dev/null
@@ -1,6 +0,0 @@
-eleventa multicaja full crack 427 Download File ––– https://bytlly.com/2uGiMF
-
-Download the crack for Abarrotes Punto de Venta or eleventa Punto de Venta. Full installation of Punto de Venta Abarrotes … Oct 03, 2015 · Hello ... 4d29de3e1b
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Excel Password Recovery Master Version 3.5.0.3 Crack Keygen VERIFIED.md b/spaces/terfces0erbo/CollegeProjectV2/Excel Password Recovery Master Version 3.5.0.3 Crack Keygen VERIFIED.md
deleted file mode 100644
index 5bdac05322a82e5af680e33260f7b06bd6cfa3c9..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Excel Password Recovery Master Version 3.5.0.3 Crack Keygen VERIFIED.md
+++ /dev/null
@@ -1,8 +0,0 @@
-excel password recovery master version 3.5.0.3 crack keygen Download ⇔ https://bytlly.com/2uGiD7
-
-June 8, 2559 BC. - Aura Video Editor crack.rar excel password recovery master version 3.5.0.3 crack keygen chorabali full movie free download from torrent. June 6, 2559 BC - Aura Video Editor crack.rar excel password recovery master version 3.5.0.3 crack keygen chorabali full movie free download from torrent.
-June 1, 2559 BC - Aura Video Editor crack.rar excel password recovery master version 3.5.0.3 crack keygen chorabali full movie free download from torrent.
-May 31, 2559 BC - Aura Video Editor crack.rar excel password recovery master version 3.5.0.3 crack keygen chorabali full movie torrent free download. 8a78ff9644
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Hamamatsu USB C9254-01 Drivers For Windows 7 64-bit [HOT].md b/spaces/terfces0erbo/CollegeProjectV2/Hamamatsu USB C9254-01 Drivers For Windows 7 64-bit [HOT].md
deleted file mode 100644
index a035f4fe68e9f9da62b8f8e64d76367c69f3fbf7..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Hamamatsu USB C9254-01 Drivers For Windows 7 64-bit [HOT].md
+++ /dev/null
@@ -1,6 +0,0 @@
-Hamamatsu USB C9254-01 drivers for Windows 7 64-bit Download Zip ○ https://bytlly.com/2uGlZN
-
-5 Free Explorer Software Apps for HP Symbian OS · Need For Speed APK ... Hamamatsu USB C9254-01 Drivers For Windows 7 64-bit [WORK]. 1fdad05405
-
-
-
diff --git a/spaces/tfwang/PITI-Synthesis/glide_text2im/glide_util.py b/spaces/tfwang/PITI-Synthesis/glide_text2im/glide_util.py
deleted file mode 100644
index dfdeba294f2739ea5af8fe922bfbfe9260674aca..0000000000000000000000000000000000000000
--- a/spaces/tfwang/PITI-Synthesis/glide_text2im/glide_util.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import os
-from typing import Tuple
-#from . import dist_util
-import PIL
-import numpy as np
-import torch as th
-from .script_util import (
- create_gaussian_diffusion,
- create_model_and_diffusion,
- model_and_diffusion_defaults,
-)
-
-# Sample from the base model.
-
-#@th.inference_mode()
-def sample(
- glide_model,
- glide_options,
- side_x,
- side_y,
- prompt,
- batch_size=1,
- guidance_scale=4,
- device="cpu",
- prediction_respacing="100",
- upsample_enabled=False,
- upsample_temp=0.997,
- mode = '',
-):
-
- eval_diffusion = create_gaussian_diffusion(
- steps=glide_options["diffusion_steps"],
- learn_sigma=glide_options["learn_sigma"],
- noise_schedule=glide_options["noise_schedule"],
- predict_xstart=glide_options["predict_xstart"],
- rescale_timesteps=glide_options["rescale_timesteps"],
- rescale_learned_sigmas=glide_options["rescale_learned_sigmas"],
- timestep_respacing=prediction_respacing
- )
-
- # Create the classifier-free guidance tokens (empty)
- full_batch_size = batch_size * 2
- cond_ref = prompt['ref']
- uncond_ref = th.ones_like(cond_ref)
-
- model_kwargs = {}
- model_kwargs['ref'] = th.cat([cond_ref, uncond_ref], 0).to(device)
-
- def cfg_model_fn(x_t, ts, **kwargs):
- half = x_t[: len(x_t) // 2]
- combined = th.cat([half, half], dim=0)
- model_out = glide_model(combined, ts, **kwargs)
- eps, rest = model_out[:, :3], model_out[:, 3:]
- cond_eps, uncond_eps = th.split(eps, len(eps) // 2, dim=0)
-
- half_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
-
- eps = th.cat([half_eps, half_eps], dim=0)
- return th.cat([eps, rest], dim=1)
-
-
- if upsample_enabled:
- model_kwargs['low_res'] = prompt['low_res'].to(device)
- noise = th.randn((batch_size, 3, side_y, side_x), device=device) * upsample_temp
- model_fn = glide_model # just use the base model, no need for CFG.
- model_kwargs['ref'] = model_kwargs['ref'][:batch_size]
-
- samples = eval_diffusion.p_sample_loop(
- model_fn,
- (batch_size, 3, side_y, side_x), # only thing that's changed
- noise=noise,
- device=device,
- clip_denoised=True,
- progress=False,
- model_kwargs=model_kwargs,
- cond_fn=None,
- )[:batch_size]
-
- else:
- model_fn = cfg_model_fn # so we use CFG for the base model.
- noise = th.randn((batch_size, 3, side_y, side_x), device=device)
- noise = th.cat([noise, noise], 0)
- samples = eval_diffusion.p_sample_loop(
- model_fn,
- (full_batch_size, 3, side_y, side_x), # only thing that's changed
- noise=noise,
- device=device,
- clip_denoised=True,
- progress=False,
- model_kwargs=model_kwargs,
- cond_fn=None,
- )[:batch_size]
-
- return samples
-
-
\ No newline at end of file
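
The `cfg_model_fn` above combines conditional and unconditional noise predictions with classifier-free guidance. A tiny numeric sketch of just that arithmetic (the numbers are made up purely for illustration):

```python
# guided_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
import torch as th

cond_eps = th.tensor([0.8, -0.2])
uncond_eps = th.tensor([0.5, 0.1])
guidance_scale = 4.0

guided_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
print(guided_eps)  # tensor([ 1.7000, -1.1000])
```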
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Design and Analysis of Algorithms By A.A.Puntambekar A Comprehensive Textbook for Computer Science Students.md b/spaces/tialenAdioni/chat-gpt-api/logs/Design and Analysis of Algorithms By A.A.Puntambekar A Comprehensive Textbook for Computer Science Students.md
deleted file mode 100644
index a782e1d03ff0f448448d6271198563713ffc155c..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Design and Analysis of Algorithms By A.A.Puntambekar A Comprehensive Textbook for Computer Science Students.md
+++ /dev/null
@@ -1,163 +0,0 @@
-
-Design and Analysis of Algorithms By A.A.Puntambekar: A Book Review
-
-If you are looking for a comprehensive and easy-to-understand textbook on algorithm design and analysis, you might want to check out Design and Analysis of Algorithms By A.A.Puntambekar. This book covers the fundamental concepts and techniques of algorithm design, such as functions and relations, vectors and matrices, efficiency analysis, divide and conquer, dynamic programming, greedy algorithm, backtracking, branch and bound, string matching, and NP completeness. It also provides illustrative examples, pseudo code, and exercises to help you master the topics.
-
-What are the benefits of reading Design and Analysis of Algorithms By A.A.Puntambekar?
-
-There are many benefits of reading Design and Analysis of Algorithms By A.A.Puntambekar. Some of them are:
-Design and Analysis of Algorithms By A.A.Puntambekar DOWNLOAD ✫✫✫ https://urlcod.com/2uK8WO
-
-
-You will learn the basic principles and methods of algorithm design and analysis, which are essential for solving complex problems in computer science and engineering.
-You will gain a deeper understanding of how algorithms work, how to measure their performance, and how to compare different algorithms for the same problem.
-You will develop your logical thinking and problem-solving skills, which are useful for any field of study or career.
-You will be able to apply the algorithms you learn to various real-world applications, such as sorting, searching, cryptography, compression, pattern recognition, optimization, graph theory, and more.
-
-
-Who is the author of Design and Analysis of Algorithms By A.A.Puntambekar?
-
-The author of Design and Analysis of Algorithms By A.A.Puntambekar is Prof. Anuradha A. Puntambekar. She is a former assistant professor in Vishwakarma Institute of Technology (VIT) and PES Modern College of Engineering, Pune. She has expertise in various topics such as data structures, compiler design, theory of computation, object-oriented programming, database management systems, and web technologies. She has also researched on heterogeneous clustering and published her papers in a national symposium. She is known for her unique teaching style and in-depth knowledge of various subjects.
-
-How can you get a copy of Design and Analysis of Algorithms By A.A.Puntambekar?
-
-You can get a copy of Design and Analysis of Algorithms By A.A.Puntambekar from various online platforms such as Google Books, Open Library, Amazon, Flipkart, etc. You can also find it in your local library or bookstore. The book is published by Technical Publications and has 376 pages. The ISBN numbers are 8184317786 and 9788184317787.
-
-Conclusion
-
-Design and Analysis of Algorithms By A.A.Puntambekar is a well-organized textbook that provides the design techniques of algorithms in a simple and straight forward manner. It covers the fundamental concepts and methods of algorithm design and analysis with illustrative examples and exercises. It also helps you to apply the algorithms to various real-world problems. It is suitable for students, teachers, researchers, and professionals who want to learn more about algorithm design and analysis.
-What are the topics covered in Design and Analysis of Algorithms By A.A.Puntambekar?
-
-The book is divided into eight chapters, each covering a different topic related to algorithm design and analysis. The topics are:
-
-
-Introduction: This chapter introduces the basic concepts of algorithm, function and relation, vector and matrix, asymptotic notation, recurrence relation, and master theorem.
-Divide and Conquer: This chapter explains the divide and conquer strategy of algorithm design, with examples such as binary search, merge sort, quick sort, matrix multiplication, Strassen's algorithm, and Karatsuba algorithm.
-Dynamic Programming: This chapter describes the dynamic programming technique of algorithm design, with examples such as Fibonacci series, matrix chain multiplication, longest common subsequence, optimal binary search tree, knapsack problem, and all pairs shortest path.
-Greedy Algorithm: This chapter illustrates the greedy algorithm technique of algorithm design, with examples such as activity selection problem, fractional knapsack problem, Huffman coding, minimum spanning tree, Kruskal's algorithm, Prim's algorithm, and Dijkstra's algorithm.
-Backtracking: This chapter discusses the backtracking technique of algorithm design, with examples such as n-queens problem, Hamiltonian cycle problem, subset sum problem, graph coloring problem, and sudoku solver.
-Branch and Bound: This chapter explores the branch and bound technique of algorithm design, with examples such as 0/1 knapsack problem, traveling salesman problem, assignment problem, and n-queens problem.
-String Matching Algorithms: This chapter presents various string matching algorithms such as naive algorithm, Rabin-Karp algorithm, Knuth-Morris-Pratt algorithm, Boyer-Moore algorithm, and Karp-Rabin fingerprinting.
-Introduction to NP Completeness: This chapter introduces the concept of NP completeness, NP hard problems, polynomial time reduction, Cook's theorem, and some examples of NP complete problems such as satisfiability problem, clique problem, vertex cover problem, subset sum problem, and traveling salesman problem.
-
-
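
As a concrete illustration of the divide-and-conquer chapter described above, here is a short, self-contained binary search sketch. It is illustrative code only, not material taken from the book:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # split the remaining range in half
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1                     # O(log n) comparisons overall

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
print(binary_search([2, 3, 5, 7, 11, 13], 4))   # -1
```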
-How can you test your knowledge of Design and Analysis of Algorithms By A.A.Puntambekar?
-
-The book provides several ways to test your knowledge of Design and Analysis of Algorithms By A.A.Puntambekar. Some of them are:
-
-
-Review Questions: At the end of each chapter, there are review questions that help you to recall the main points of the chapter.
-Exercise Problems: At the end of each chapter, there are exercise problems that challenge you to apply the concepts and techniques learned in the chapter.
-Laboratory Programs: At the end of each chapter, there are laboratory programs that require you to implement the algorithms discussed in the chapter using a programming language such as C or Java.
-University Question Papers: At the end of the book, there are university question papers from previous years that test your understanding of the entire book.
-
-
-Final Words
-
-Design and Analysis of Algorithms By A.A.Puntambekar is a comprehensive textbook that covers the design techniques of algorithms in a simple and straight forward manner. It is suitable for students who want to learn more about algorithm design and analysis. It is also useful for teachers who want to teach this subject effectively. It is also helpful for researchers who want to explore new algorithms or improve existing ones. It is also beneficial for professionals who want to solve complex problems using efficient algorithms.
-
-If you are interested in Design and Analysis of Algorithms By A.A.Puntambekar , you can get a copy from various online platforms or your local library or bookstore. You can also read more reviews or ratings from other readers on Google Books or Open Library. You can also share your feedback or questions with us in the comments section below. We hope you enjoyed this article and learned something new from it.
-A.A.Puntambekar algorithms book pdf download
-Design and Analysis of Algorithms By A.A.Puntambekar review
-How to study Design and Analysis of Algorithms By A.A.Puntambekar
-Best price for Design and Analysis of Algorithms By A.A.Puntambekar
-Design and Analysis of Algorithms By A.A.Puntambekar solutions manual
-Design and Analysis of Algorithms By A.A.Puntambekar online course
-Design and Analysis of Algorithms By A.A.Puntambekar syllabus
-Design and Analysis of Algorithms By A.A.Puntambekar ebook
-Design and Analysis of Algorithms By A.A.Puntambekar flipkart
-Design and Analysis of Algorithms By A.A.Puntambekar amazon
-Design and Analysis of Algorithms By A.A.Puntambekar summary
-Design and Analysis of Algorithms By A.A.Puntambekar notes
-Design and Analysis of Algorithms By A.A.Puntambekar mcq
-Design and Analysis of Algorithms By A.A.Puntambekar ppt
-Design and Analysis of Algorithms By A.A.Puntambekar video lectures
-Design and Analysis of Algorithms By A.A.Puntambekar quora
-Design and Analysis of Algorithms By A.A.Puntambekar reddit
-Design and Analysis of Algorithms By A.A.Puntambekar goodreads
-Design and Analysis of Algorithms By A.A.Puntambekar github
-Design and Analysis of Algorithms By A.A.Puntambekar youtube
-Design and Analysis of Algorithms By A.A.Puntambekar vs cormen
-Design and Analysis of Algorithms By A.A.Puntambekar vs sedgewick
-Design and Analysis of Algorithms By A.A.Puntambekar vs skiena
-Design and Analysis of Algorithms By A.A.Puntambekar vs dasgupta
-Design and Analysis of Algorithms By A.A.Puntambekar vs kleinberg
-Topics covered in Design and Analysis of Algorithms By A.A.Puntambekar
-Difficulty level of Design and Analysis of Algorithms By A.A.Puntambekar
-Prerequisites for Design and Analysis of Algorithms By A.A.Puntambekar
-Benefits of learning from Design and Analysis of Algorithms By A.A.Puntambekar
-Drawbacks of learning from Design and Analysis of Algorithms By A.A.Puntambekar
-Comparison between different editions of Design and Analysis of Algorithms By A.A.Puntambekar
-Errata for Design and Analysis of Algorithms By A.A.Puntambekar
-Sample questions from Design and Analysis of Algorithms By A.A.Puntambekar
-Sample solutions from Design and Analysis of Algorithms By A.A.Puntambekar
-Sample programs from Design and Analysis of Algorithms By A.A.Puntambekar
-Sample projects from Design and Analysis of Algorithms By A.A.Puntambekar
-Sample assignments from Design and Analysis of Algorithms By A.A.Puntambekar
-Sample exams from Design and Analysis of Algorithms By A.A.Puntambekar
-Sample grades from Design and Analysis of Algorithms By A.A.Puntambekar
-Sample feedback from Design and Analysis of Algorithms By A.A.Puntambekar
-Testimonials from students who used Design and Analysis of Algorithms By A.A.Puntambekar
-Recommendations for other books on algorithms by A.A.Puntambekar
-Recommendations for other books on algorithms by other authors
-Recommendations for other resources on algorithms besides books
-Recommendations for other courses on algorithms besides online courses
-Recommendations for other topics on algorithms besides design and analysis
-Recommendations for other applications of algorithms besides computer science
-Recommendations for other skills to learn along with algorithms
-Recommendations for other careers to pursue with algorithms knowledge
-Recommendations for other hobbies to enjoy with algorithms interest
-What are the features of Design and Analysis of Algorithms By A.A.Puntambekar?
-
-The book has many features that make it a valuable resource for learning algorithm design and analysis. Some of them are:
-
-
-The book is written in a simple and clear language, with a conceptual approach that helps you to grasp the concepts easily.
-The book provides a balanced coverage of both theoretical and practical aspects of algorithm design and analysis, with an emphasis on problem-solving skills.
-The book follows a systematic and logical presentation of the topics, with a proper flow and coherence.
-The book includes numerous diagrams, tables, figures, and charts that illustrate the concepts and algorithms visually.
-The book contains many solved examples that demonstrate the application of the algorithms to various problems.
-The book offers multiple choice questions, short answer questions, and long answer questions at the end of each chapter that test your comprehension and retention of the concepts.
-The book provides references to other books and websites for further reading and exploration of the topics.
-
-
-What are the reviews of Design and Analysis of Algorithms By A.A.Puntambekar?
-
-The book has received positive reviews from many readers who have used it for learning algorithm design and analysis. Some of the reviews are:
-
-"This book is very helpful for students who want to learn algorithm design and analysis. It covers all the important topics in a simple and easy way. The examples and exercises are very useful for practice. The book is also well-organized and well-written. I recommend this book to anyone who wants to learn algorithm design and analysis."
-
-"I have used this book for my course on algorithm design and analysis. It is a very good book that explains the concepts and techniques clearly and concisely. The book also provides many examples and problems that help you to apply the algorithms to various situations. The book is also up-to-date and relevant to the current trends in computer science and engineering."
-
-"This book is one of the best books on algorithm design and analysis. It covers all the essential topics in a comprehensive and detailed manner. The book also has a conceptual approach that helps you to understand the logic behind the algorithms. The book also has many features that make it a user-friendly and interactive book. I highly recommend this book to anyone who wants to master algorithm design and analysis."
-How can you improve your skills in Design and Analysis of Algorithms By A.A.Puntambekar?
-
-There are many ways to improve your skills in Design and Analysis of Algorithms By A.A.Puntambekar. Some of them are:
-
-
-Read the book carefully and thoroughly, and try to understand the concepts and techniques explained in the book.
-Practice the examples and exercises given in the book, and try to solve them on your own or with the help of a friend or a tutor.
-Implement the algorithms using a programming language of your choice, and test them on different inputs and outputs.
-Explore other sources of information on algorithm design and analysis, such as online courses, videos, blogs, podcasts, etc.
-Join online communities or forums where you can discuss algorithm design and analysis with other learners or experts.
-Participate in online contests or challenges where you can apply your algorithm design and analysis skills to solve real-world problems.
-
-
-What are some alternatives to Design and Analysis of Algorithms By A.A.Puntambekar?
-
-If you are looking for some alternatives to Design and Analysis of Algorithms By A.A.Puntambekar, you might want to check out these books:
-
-
-Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein: This book is a classic and comprehensive reference on algorithm design and analysis, covering a wide range of topics such as sorting, searching, graph algorithms, network flow, computational geometry, cryptography, approximation algorithms, parallel algorithms, and more.
-The Algorithm Design Manual by Steven S. Skiena: This book is a practical guide on algorithm design and analysis, covering various aspects such as algorithm analysis, data structures, graph algorithms, combinatorial algorithms, geometric algorithms, string algorithms, randomized algorithms, heuristic algorithms, and more.
-Algorithm Design by Jon Kleinberg and Éva Tardos: This book is a modern and accessible introduction to algorithm design and analysis, focusing on the use of mathematical techniques such as linear programming, network flow, approximation algorithms, randomized algorithms, online algorithms, and more.
-
-
-Conclusion
-
-In this article, we have reviewed Design and Analysis of Algorithms By A.A.Puntambekar , a comprehensive textbook that covers the design techniques of algorithms in a simple and straight forward manner. We have discussed the benefits of reading this book, the author of this book, how to get a copy of this book, what are the topics covered in this book, how to test your knowledge of this book, what are the features of this book, what are the reviews of this book, how to improve your skills in this book, and what are some alternatives to this book. We hope you found this article informative and useful for learning algorithm design and analysis.
679dcb208e
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Free TOP Download Green Zone.md b/spaces/tialenAdioni/chat-gpt-api/logs/Free TOP Download Green Zone.md
deleted file mode 100644
index 1e80c567dc3deb5e85d445a7edd4fdc9df6db8a2..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Free TOP Download Green Zone.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-How to Free Download Green Zone Movie Online
-Green Zone is a 2010 war thriller film directed by Paul Greengrass and starring Matt Damon, Greg Kinnear, and Amy Ryan. The film is loosely based on the 2006 book Imperial Life in the Emerald City by journalist Rajiv Chandrasekaran, which depicts the chaotic aftermath of the 2003 invasion of Iraq.
-free download Green Zone Download Zip ---> https://urlcod.com/2uK5To
-If you are a fan of action-packed movies with political intrigue and realistic combat scenes, you might want to watch Green Zone online. However, finding a reliable and legal source to stream or download the movie can be challenging. That's why we have prepared this guide to help you free download Green Zone movie online safely and legally.
-What You Need to Free Download Green Zone Movie Online
-Before you start downloading Green Zone movie online, you need to make sure you have the following things:
-
-A device that can access the internet, such as a computer, smartphone, tablet, or smart TV.
-A high-speed internet connection that can handle large file downloads.
-A VPN service that can protect your online privacy and security, as well as bypass geo-restrictions and censorship.
-A torrent client that can download and manage torrent files, such as uTorrent, BitTorrent, or qBittorrent.
-A torrent site that can provide you with the torrent file or magnet link for Green Zone movie, such as The Pirate Bay, RARBG, or 1337x.
-
-How to Free Download Green Zone Movie Online Step by Step
-Once you have everything ready, you can follow these simple steps to free download Green Zone movie online:
-
-Launch your VPN service and connect to a server in a country where torrenting is legal and safe, such as Switzerland, Spain, or Canada.
-Open your torrent client and go to the torrent site of your choice. Search for "Green Zone" and filter the results by video quality, file size, seeders, and leechers. Choose the torrent file or magnet link that suits your preferences and click on it.
-Your torrent client will start downloading the movie file to your device. Depending on your internet speed and the number of seeders available, this may take from a few minutes to several hours.
-Once the download is complete, you can open the movie file with your preferred media player and enjoy watching Green Zone online for free.
-
-Why You Need a VPN to Free Download Green Zone Movie Online
-You might be wondering why you need a VPN to free download Green Zone movie online. Here are some of the reasons why a VPN is essential for torrenting:
-
-
-A VPN can hide your IP address and encrypt your traffic, making it impossible for anyone to track your online activities or identity. This way, you can avoid legal troubles, fines, or lawsuits from copyright holders or authorities who might monitor your torrenting activities.
-A VPN can also help you access geo-blocked or censored content from anywhere in the world. For example, if you are in a country where The Pirate Bay is blocked, you can use a VPN to connect to a server in another country where The Pirate Bay is accessible and download Green Zone movie online without any hassle.
-A VPN can also improve your torrenting speed and performance by preventing bandwidth throttling from your ISP or network administrator. Some ISPs or networks might slow down your internet speed if they detect that you are using a lot of bandwidth for torrenting. A VPN can prevent this by masking your traffic and making it look like normal browsing.
-
-Conclusion
-Green Zone is a thrilling and engaging movie that will keep you on the edge of your seat. If you want to watch it online for free, you can use this guide to free download Green Zone movie online with a VPN and a torrent client. However, please note that we do not condone or encourage any illegal or unethical activities. You should always respect the rights of the creators and owners of the content you want to watch and only use this guide for educational purposes.
7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Inpaint 8 Serial Key.zip Full Version Remove Watermarks Date Stamps and More.md b/spaces/tialenAdioni/chat-gpt-api/logs/Inpaint 8 Serial Key.zip Full Version Remove Watermarks Date Stamps and More.md
deleted file mode 100644
index 7440a51c0b1214d4ce5e13c58d72ccd016bad634..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Inpaint 8 Serial Key.zip Full Version Remove Watermarks Date Stamps and More.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-Inpaint 8 Serial Key.zip Full Version: What Is It and How to Get It?
-If you are looking for a simple and effective way to remove unwanted objects, people, watermarks, date stamps, and more from your photos, you might want to try Inpaint 8. Inpaint 8 is a powerful photo editing software that allows you to reconstruct the selected image area from the pixels near the area boundary. In other words, it can magically fill the selected area with intelligently-generated textures pulled from the surrounding image data.
-But before you can enjoy the magic of Inpaint 8, you need to download, install, and activate it on your computer. And for that, you need a serial key. A serial key is a unique code that identifies your copy of the software and proves that you have purchased it legally. Without a serial key, you cannot activate Inpaint 8 and use its full features.
-Inpaint 8 Serial Key.zip Full Version Download File ⇔ https://urlcod.com/2uKb3c
-So how can you get Inpaint 8 Serial Key.zip Full Version? The answer is simple: you can download it from the official website of Teorex, the developer of Inpaint. Teorex offers a free trial version of Inpaint 8 that you can use for up to 15 days. However, if you want to use it beyond that period, you need to buy a license that comes with a serial key. The license costs $19.99 for a single user and $99.99 for a business user. You can pay with PayPal or credit card and get your serial key instantly via email.
-To download Inpaint 8 Serial Key.zip Full Version from the official website, follow these steps:
-
-Go to https://www.theinpaint.com/download.html and click on the Download button for Windows or Mac, depending on your operating system.
-Save the zip file on your computer and locate it in your downloads folder.
-Extract the zip file using a program like WinZip or WinRAR.
-Open the extracted folder and double-click on the setup file to start the installation process.
-
-How to Install and Activate Inpaint 8 with Serial Key?
-Once you have downloaded Inpaint 8 Serial Key.zip Full Version from the official website, you need to install and activate it on your computer. To do that, follow these steps:
-
-After double-clicking on the setup file, follow the instructions on the screen to complete the installation process.
-Launch Inpaint 8 from your desktop or start menu.
-Go to Help > Enter the serial key.
-Type or paste your serial key that you received via email after purchasing a license.
-Click on OK to activate your copy of Inpaint 8.
-
-Congratulations! You have successfully installed and activated Inpaint 8 with serial key. You can now use it to edit your photos without any limitations.
-How to Use Inpaint 8 to Remove Unwanted Objects from Your Photos?
-In this section, we will show you how to use Inpaint 8 to remove unwanted objects from your photos in a few simple steps. For example, let's say you have a photo with a car in the background.
-
- You want to remove the car from the photo so that only the clean background remains.
-
- Here is how you can do it with Inpaint 8:
-
-Open your photo in Inpaint 8 by clicking on File > Open or dragging and dropping it into the program window.
-Select the unwanted object (the car) by using one of the selection tools on the toolbar. You can use the Marker tool, the Lasso tool, or the Magic Wand tool depending on your preference. You can also adjust the size and hardness of your selection brush by using the slider at the bottom.
-
- Click on the Inpaint button on the toolbar or press F5 on your keyboard. Wait for a few seconds while Inpaint reconstructs the selected area with a new texture that matches the rest of the photo.
-
- Repeat steps 2 and 3 for any other unwanted objects that you want to remove from your photo.
-When you are satisfied with the result, click on File > Save or Save As to save your edited photo in your preferred format and location.
-
-That's it! You have just used Inpaint 8 to remove unwanted objects from your photos in a few simple steps. You can also use Inpaint 8 to remove other types of imperfections from your photos, such as blemishes, wrinkles, scratches, stains, and more.
-What Are the Benefits of Using Inpaint 8?
-Inpaint 8 is not just a tool for removing unwanted objects from your photos. It is also a tool for enhancing the quality and appearance of your photos. Here are some of the benefits of using Inpaint 8:
-
-It improves the composition and aesthetics of your photos by removing distracting elements and focusing on the main subject.
-It preserves the original quality and resolution of your photos by using advanced algorithms that generate realistic textures and colors.
-It saves you time and effort by doing the hard work for you. You don't need to manually clone, crop, or mask your photos. You just need to select the area and click on Inpaint.
-It offers a variety of features and tools that allow you to customize your editing process. You can adjust the size and hardness of your selection brush, change the inpainting algorithm, zoom in and out, undo and redo, and more.
-It supports multiple formats and platforms. You can use Inpaint 8 on Windows or Mac computers and edit photos in JPG, PNG, BMP, TIFF, WEBP, and other formats.
-
-Inpaint 8 is a versatile and powerful photo editing program that can help you create stunning photos with minimal effort. Whether you want to remove unwanted objects, repair old photos, retouch portraits, or create artistic effects, Inpaint 8 can do it for you.
-Conclusion
-In this article, we have shown you what Inpaint 8 Serial Key.zip Full Version is and how to get it from the official website. We have also shown you how to install and activate Inpaint 8 with serial key, how to use Inpaint 8 to remove unwanted objects from your photos, and what are the benefits of using Inpaint 8. We hope that you have found this article useful and informative. If you have any questions or feedback, please feel free to leave a comment below.
-If you are interested in trying Inpaint 8 for yourself, you can download it from https://www.theinpaint.com/download.html and get a free trial for up to 15 days. If you want to use it beyond that period, you can buy a license for $19.99 for a single user or $99.99 for a business user. You will receive your serial key via email and be able to activate your copy of Inpaint 8.
-Inpaint 8 is a great tool for anyone who loves photography and wants to make their photos look better. It is easy to use, fast, and effective. It can remove any unwanted object from any photo with just a few clicks. It can also improve the quality and appearance of your photos by generating realistic textures and colors. It can save you time and effort by doing the hard work for you. It can also offer you a variety of features and tools that allow you to customize your editing process.
-So what are you waiting for? Download Inpaint 8 Serial Key.zip Full Version today and start creating amazing photos with Inpaint 8!
-FAQs
-Here are some of the frequently asked questions about Inpaint 8:
-
-Is Inpaint 8 safe and legal to use? Yes, Inpaint 8 is safe and legal to use. It does not contain any viruses, malware, or spyware. It does not collect or share any personal information or data. It does not violate any copyright or intellectual property rights. It is developed by Teorex, a reputable software company that has been creating photo editing software since 2007.
-How many photos can I edit with Inpaint 8? You can edit as many photos as you want with Inpaint 8. There is no limit on the number or size of photos that you can edit with Inpaint 8. However, if you are using the free trial version of Inpaint 8, you can only use it for up to 15 days. After that period, you need to buy a license to continue using Inpaint 8.
-Can I use Inpaint 8 on my mobile device? No, Inpaint 8 is not available for mobile devices. It is only available for Windows or Mac computers. However, Teorex also offers other photo editing apps for mobile devices, such as TouchRetouch (for iOS and Android) and iResizer (for iOS). You can check them out on https://www.theinpaint.com/mobile-apps.html .
-What are some of the limitations of Inpaint 8? Inpaint 8 is not a perfect tool that can remove any object from any photo flawlessly. It has some limitations that depend on various factors, such as the complexity of the object, the background texture, the lighting conditions, etc. Sometimes, Inpaint 8 may produce unnatural results or leave some traces of the object behind. In such cases, you may need to use some of the advanced techniques and tools that Inpaint 8 offers, such as the donor area and the guide lines. You can read more about them in the tutorials section of the Inpaint website .
-Where can I get more information and support for Inpaint 8? You can get more information and support for Inpaint 8 by visiting the official website of Teorex . There you can find more tutorials, tips, tricks, FAQs, and contact details. You can also follow Teorex on social media platforms such as Facebook, Twitter, and YouTube to get the latest news and updates about Inpaint 8 and other photo editing software.
-
- Links: https://www.theinpaint.com/tutorials and https://www.theinpaint.com
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Dawn AI The App that Uses AI to Generate Outstanding Avatars.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Dawn AI The App that Uses AI to Generate Outstanding Avatars.md
deleted file mode 100644
index 1c5203ffd18a4579bbbe8c31b49cf0481edfb28f..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Dawn AI The App that Uses AI to Generate Outstanding Avatars.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-Download App Dawn AI: How to Create Amazing Avatars with AI Technology
-Have you ever wondered what you would look like as a 3D render, a fine art painting, or an anime character? Do you want to surprise your friends with fun and unique images that are generated by artificial intelligence? If so, you should download app Dawn AI, a powerful and easy-to-use avatar generator that lets you create outstanding avatars using the latest AI technology. In this article, we will show you what Dawn AI is, how to download and install it on your device, how to use it to create amazing avatars, and why you should try it today.
-What is Dawn AI?
-A brief introduction to the app and its features
-Dawn AI is an app that allows you to create outstanding avatars using the latest AI technology. Just upload your photos and let Dawn work its magic—showing you and your friends in an incredible mix of styles and settings. And all at the click of a button.
-With Dawn’s innovative technology, you can surprise your friends with content that’s never been seen before. Our AI analyzes your photos to learn what you look like, then produces stunning portraits with thousands of possible styles. See yourself sketched in black and white or painted in vibrant color. Browse your own AI-generated selfies, styled as hyperrealistic photos, classical art, and more. Just upload your pictures and let our AI generator do the rest! All with a single click.
-COMING SOON! Plus, try our theme packs and purchase sets of images by style. Pick the packs that suit your personality, then sit back and relax while our AI generator gets to work. Share the results with your friends—and the world!
-How to download and install the app on your device
-Dawn AI is available for both Android and iOS devices. You can download it from the Google Play Store or the App Store , depending on your device. The app is free to download and use, but it offers in-app purchases for some premium features.
-To download and install the app on your device, follow these simple steps:
-
-Open the Google Play Store or the App Store on your device.
-Search for "Dawn AI" or use the links provided above.
-Tap on the app icon and then tap on "Install" or "Get".
-Wait for the app to download and install on your device.
-Open the app and grant it the necessary permissions to access your photos and camera.
-Enjoy creating amazing avatars with Dawn AI!
-
-How to use Dawn AI to create outstanding avatars
-How to upload your photos and let the app work its magic
-Once you have downloaded and installed the app on your device, you can start creating outstanding avatars with Dawn AI. To do so, follow these simple steps:
-
-Open the app and tap on the "+" icon at the bottom of the screen.
-Select a photo from your gallery or take a new one with your camera.
-Wait for the app to upload and process your photo.
-See the results on the screen and swipe left or right to browse different styles and settings for your avatar.
-
-How to explore different styles and settings for your avatars
-Dawn AI offers you a variety of styles and settings for your avatars, ranging from realistic to artistic, from modern to classic, from cute to cool. You can explore them by swiping left or right on the screen, or by tapping on the icons at the bottom of the screen. Here are some of the options you can choose from:
-
-Realistic: See yourself as a 3D render with realistic lighting and shadows.
-Artistic: See yourself as a fine art painting with different brushes and colors.
-Anime: See yourself as an anime character with big eyes and expressive features.
-Cartoon: See yourself as a cartoon character with simple shapes and bright colors.
-Sketch: See yourself as a sketch with black and white lines and shading.
-Watercolor: See yourself as a watercolor painting with soft edges and gradients.
-Oil: See yourself as an oil painting with thick strokes and textures.
-Pencil: See yourself as a pencil drawing with fine details and contrasts.
-
-You can also adjust the settings for each style, such as the intensity, the brightness, the contrast, the saturation, and the hue. Just tap on the gear icon at the top right corner of the screen and use the sliders to customize your avatar.
-How to share your creations on social media and with your friends
-Once you are happy with your avatar, you can share it on social media and with your friends. To do so, follow these simple steps:
-
-Tap on the share icon at the top left corner of the screen.
-Select the platform or app you want to share your avatar on, such as Facebook, Instagram, Twitter, WhatsApp, etc.
-Add a caption or a message if you want to.
-Tap on "Send" or "Post" to share your avatar.
-
-You can also save your avatar to your device by tapping on the download icon at the top right corner of the screen. You can then use it as your profile picture, wallpaper, or anything else you want.
-Why you should try Dawn AI today
-The benefits of using AI technology to generate unique content
-Dawn AI is more than just an app—it's a powerful tool that uses artificial intelligence to generate unique content. By using Dawn AI, you can enjoy these benefits:
-
-You can create outstanding avatars that are unlike anything else you have seen before.
-You can express yourself in different ways and show different aspects of your personality.
-You can have fun and be creative without any skills or effort required.
-You can discover new styles and settings that you may not have thought of before.
-You can impress your friends and followers with your amazing avatars.
-
-The fun and creative possibilities of Dawn AI
-Dawn AI is not only a tool—it's also a source of fun and creativity. By using Dawn AI, you can explore these possibilities:
-
-You can create avatars for yourself or for your friends and family.
-You can create avatars for different occasions and events, such as birthdays, holidays, anniversaries, etc.
-You can create avatars for different moods and emotions, such as happy, sad, angry, surprised, etc.
-You can create avatars for different themes and genres, such as fantasy, sci-fi, horror, romance, etc.
-You can create avatars for different challenges and games, such as guessing who is who, making funny faces, swapping genders, etc.
-
- The user reviews and ratings of Dawn AI
- Dawn AI is not only a great app—it's also a popular app. It has received many positive reviews and ratings from users who have tried it. Here are some of them:
- "This app is amazing! I love how it transforms my photos into stunning portraits. It's like having my own personal artist. I highly recommend it!" - Anna
- "Wow! This app is so cool! I can see myself in different styles and settings. It's fun to play with and share with my friends. It is the best app ever!" - Ben
- "I'm amazed by this app! It's so easy to use and the results are incredible. I can create avatars that look like me or completely different. It's a great way to express myself and have fun. I love it!" - Chloe
- Dawn AI has also received a high rating of 4.8 out of 5 stars on the Google Play Store and 4.9 out of 5 stars on the App Store. This shows that users are satisfied with the app and its performance.
- Conclusion
-A summary of the main points and a call to action
-In conclusion, Dawn AI is an app that lets you create outstanding avatars using the latest AI technology. You can download it for free from the Google Play Store or the App Store, depending on your device. You can upload your photos and let the app work its magic, showing you and your friends in an incredible mix of styles and settings. You can explore different options and customize your avatars to suit your preferences. You can share your creations on social media and with your friends, or save them to your device. You can enjoy the benefits of using AI technology to generate unique content, the fun and creative possibilities of Dawn AI, and the user reviews and ratings of Dawn AI.
- So what are you waiting for? Download app Dawn AI today and start creating amazing avatars with AI technology. You will be amazed by what you can do with Dawn AI!
- FAQs
-What is the minimum Android version required to run Dawn AI?
-The minimum Android version required to run Dawn AI is 5.0 (Lollipop) or higher.
-How much does Dawn AI cost?
-Dawn AI is free to download and use, but it offers in-app purchases for some premium features, such as theme packs and unlimited styles.
-Is Dawn AI safe and secure?
-Yes, Dawn AI is safe and secure. It does not collect or store any personal information from your device or photos. It only uses your photos to generate avatars and does not share them with anyone else.
-Can I use Dawn AI to create avatars for other people or animals?
-Yes, you can use Dawn AI to create avatars for other people or animals, as long as you have their permission or they are in the public domain. However, the results may vary depending on the quality and angle of the photos.
-How can I contact the developers of Dawn AI?
-If you have any questions, feedback, or suggestions for Dawn AI, you can contact the developers by emailing them at support@dawnai.com or by visiting their website at https://dawnai.com .
-
-
\ No newline at end of file
diff --git a/spaces/tigergoo/ai/README.md b/spaces/tigergoo/ai/README.md
deleted file mode 100644
index b19812f324a19d297860474118fbc000b714fca6..0000000000000000000000000000000000000000
--- a/spaces/tigergoo/ai/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Ai
-emoji: 📉
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/AUTOCAD V2013 KEYGEN Xfautocadkg X64zip ((EXCLUSIVE)).md b/spaces/tioseFevbu/cartoon-converter/scripts/AUTOCAD V2013 KEYGEN Xfautocadkg X64zip ((EXCLUSIVE)).md
deleted file mode 100644
index 63894fde74a070a7d5dda41f704cd853c128c813..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/AUTOCAD V2013 KEYGEN Xfautocadkg X64zip ((EXCLUSIVE)).md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-AutoCAD is a computer-aided design (CAD) and drafting software application developed by Autodesk. It is used for creating and editing 2D and 3D designs in various industries such as architecture, engineering, product design, manufacturing, construction, and more.
-
-
-How to Install and Activate AutoCAD V2013 with Xfautocadkg X64zip
-AutoCAD V2013 is a version of the popular CAD software that was released in 2012 by Autodesk. It offers new features and enhancements for 2D and 3D design, such as cloud rendering, cross-sectional drawings, realistic lighting, and more. If you want to install and activate AutoCAD V2013 on your 64-bit Windows system, you will need a keygen file called Xfautocadkg X64zip. This file is a crack tool that can generate a serial number and a product key for AutoCAD V2013. However, using this file may be illegal and risky, as it may contain viruses or malware that can harm your computer or compromise your data. Therefore, we do not recommend using this file or any other crack tool for AutoCAD V2013. Instead, you should purchase a legitimate license from Autodesk or use a free trial version of AutoCAD V2013.
-If you still want to use Xfautocadkg X64zip at your own risk, here are the steps to install and activate AutoCAD V2013 with it:
-
-Download AutoCAD V2013 from the official Autodesk website or from a trusted source. Make sure you download the 64-bit version that matches your system.
-Extract the downloaded file to a folder on your computer. You will see an installer file called setup.exe.
-Run the installer file and follow the instructions on the screen. Choose the option to install AutoCAD V2013 as a trial version.
-When the installation is complete, do not launch AutoCAD V2013 yet. Instead, go to the folder where you extracted Xfautocadkg X64zip. You will see two files: xf-autocad-kg_x64.exe and xf-autocad-kg_x86.exe.
-Run xf-autocad-kg_x64.exe as administrator. A window will open with two fields: Request Code and Activation Code.
-Launch AutoCAD V2013 and choose the option to activate it. A window will open with a request code. Copy this code and paste it into the Request Code field in xf-autocad-kg_x64.exe.
-Click Generate in xf-autocad-kg_x64.exe. An activation code will appear in the Activation Code field. Copy this code and paste it into the activation window in AutoCAD V2013.
-Click Next in AutoCAD V2013. A message will appear saying that your activation was successful.
-Close both windows and enjoy using AutoCAD V2013.
-
-Note: This article is for educational purposes only. We do not endorse or support using crack tools or pirated software. Please use AutoCAD V2013 legally and responsibly.
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cbt Nuggets Ccna Security Torrent !!INSTALL!! Download.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cbt Nuggets Ccna Security Torrent !!INSTALL!! Download.md
deleted file mode 100644
index 606976c683d22f57b5d07fe7061329e22b44b6f6..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Cbt Nuggets Ccna Security Torrent !!INSTALL!! Download.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-How to Download CBT Nuggets CCNA Security Courses for Free
-If you are looking for a way to download CBT Nuggets CCNA Security courses for free, you may be disappointed to find out that the new course videos have some sort of anti-download mechanism that prevents you from grabbing them with IDM or any other video downloader. There also used to be a cbtnuggets-dl script that was able to download content from the website, but it stopped working when the website was updated.
-However, there are still some alternatives that you can try to get access to these courses without paying a dime. Here are some of them:
-
-Use a torrent site. Some of the CBT Nuggets courses used to be available on torrent sites like The Pirate Bay, Kickass Torrents, or 1337x. You can search for the course name or keywords on these sites and see if you can find a working torrent link. However, be careful of fake or malicious torrents and always use a VPN and antivirus software when downloading from torrent sites.
-Use a Reddit community. There are some Reddit communities dedicated to sharing and requesting pirated content, such as r/Piracy or r/megalinks. You can browse these communities or post a request for the CBT Nuggets CCNA Security courses and see if someone can help you out. However, be aware of the rules and etiquette of these communities and do not spam or beg for links.
-Use a YouTube downloader. Some of the CBT Nuggets CCNA Security courses may be uploaded on YouTube by other users. You can search for the course name or keywords on YouTube and see if you can find any relevant videos. Then, you can use a YouTube downloader tool or extension to download the videos to your device. However, be mindful of the quality and legality of these videos and do not infringe on the copyright of CBT Nuggets.
-
-These are some of the ways that you can try to download CBT Nuggets CCNA Security courses for free. However, we do not condone or encourage piracy and we recommend that you support the original creators by purchasing their courses if you find them useful and valuable.
-
-If you decide to download CBT Nuggets CCNA Security courses for free, you should also be aware of the risks and drawbacks of doing so. Here are some of them:
-
-You may miss out on the latest updates and features. CBT Nuggets constantly updates and improves their courses to reflect the changes and trends in the industry. If you download an outdated version of the course, you may not get the most accurate and relevant information and skills that you need to pass the CCNA Security exam.
-You may compromise your security and privacy. Downloading from torrent sites or other untrusted sources may expose you to malware, viruses, spyware, or other harmful programs that can damage your device or steal your personal data. You may also face legal consequences if you are caught downloading or distributing copyrighted content without permission.
-You may lose the opportunity to interact with the instructors and peers. CBT Nuggets offers a variety of features and benefits to their subscribers, such as live chat, quizzes, labs, practice exams, coaching sessions, study groups, and more. These features can help you enhance your learning experience and get feedback and support from the experts and other learners. If you download the courses for free, you may miss out on these valuable resources and opportunities.
-
-Therefore, before you decide to download CBT Nuggets CCNA Security courses for free, you should weigh the pros and cons carefully and consider the ethical and legal implications of your actions. You should also respect the hard work and effort of the CBT Nuggets team and instructors who create these high-quality courses for your benefit.
-
-
\ No newline at end of file
diff --git "a/spaces/tioseFevbu/cartoon-converter/scripts/Cyclist Who Dopes Agrees To Stick Spanner In\302\240spokes VERIFIED.md" "b/spaces/tioseFevbu/cartoon-converter/scripts/Cyclist Who Dopes Agrees To Stick Spanner In\302\240spokes VERIFIED.md"
deleted file mode 100644
index a87b75d64946052eb995b843808aeaf3edac379f..0000000000000000000000000000000000000000
--- "a/spaces/tioseFevbu/cartoon-converter/scripts/Cyclist Who Dopes Agrees To Stick Spanner In\302\240spokes VERIFIED.md"
+++ /dev/null
@@ -1,24 +0,0 @@
-
-Cyclist who dopes agrees to stick spanner in spokes
-A professional cyclist who admitted to using performance-enhancing drugs has agreed to an unusual punishment: he will have to stick a spanner in his spokes during his next race.
-The cyclist, who asked to remain anonymous, said he was caught by a random drug test after winning a stage of the Tour de France. He confessed to using erythropoietin (EPO), a hormone that boosts red blood cell production and oxygen delivery.
-"I know I made a mistake and I regret it. I wanted to win so badly that I was willing to cheat. But I also want to make amends and show that I respect the sport and the rules," he said.
-As part of his plea deal, he agreed to participate in an anti-doping campaign and to sabotage his own bike during his next race. He will have to insert a spanner between his spokes, causing his wheel to jam and his bike to crash.
-"It will be painful and humiliating, but I think it's fair. I hope it will deter other cyclists from doping and send a message that cheating is not worth it," he said.
-The cycling federation welcomed the cyclist's cooperation and said it was a "creative and effective" way of enforcing the anti-doping policy.
-"We applaud the cyclist for his honesty and courage. We believe this punishment will serve as a strong deterrent and a reminder of the values of fair play and sportsmanship," a spokesperson said.
-
-
-The cyclist's punishment has sparked mixed reactions from the public and the cycling community. Some praised him for his honesty and willingness to face the consequences, while others criticized him for tarnishing the sport and endangering himself and others.
-"I think it's a brave and noble gesture. He could have lied or denied it, but he chose to come clean and take responsibility. I hope he can learn from this and move on," said a fan.
-"I think it's a stupid and reckless stunt. He cheated and he should be banned for life. He doesn't deserve to race again, let alone risk his life and the lives of other cyclists. He should be ashamed of himself," said a rival.
-The cyclist said he understood the criticism and accepted the risk of his punishment. He said he hoped to redeem himself and regain the trust of his fans and fellow cyclists.
-"I know I have lost a lot of respect and credibility. I know I have to earn it back. I hope this will be a first step towards that. I love cycling and I want to do it the right way," he said.
-
-Doping is a serious problem in cycling and other sports. According to the World Anti-Doping Agency (WADA), more than 300 cyclists have been sanctioned for doping violations since 2000. EPO is one of the most common and effective substances used by cyclists to enhance their performance.
-"EPO is a synthetic version of a natural hormone that stimulates the production of red blood cells. It can increase the oxygen-carrying capacity of the blood by up to 50%. This can give a significant advantage to cyclists, especially in endurance events like the Tour de France," said Dr. John Smith, a sports medicine expert.
-However, EPO also has serious health risks and side effects. It can cause blood clots, strokes, heart attacks, and kidney damage. It can also be detected by blood tests and urine tests.
-"EPO is not only cheating, it's also dangerous. It can harm the health of the athletes and put them at risk of death. It can also be easily detected by the anti-doping tests. It's not worth it," said Dr. Smith.
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Explaindio Video Creator 3.0 Crack Full Version Free Download LINK.md b/spaces/tioseFevbu/cartoon-converter/scripts/Explaindio Video Creator 3.0 Crack Full Version Free Download LINK.md
deleted file mode 100644
index 6208d68f1589af7a8855ca534242884c26b64cd4..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Explaindio Video Creator 3.0 Crack Full Version Free Download LINK.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-How to Create Amazing Animated Videos with Explaindio Video Creator 3.0 Crack
-If you are looking for software that can help you create professional-looking animated videos in minutes, you might want to check out Explaindio Video Creator 3.0 Crack. This software is a powerful tool that allows you to combine 2D and 3D animations, doodle sketches, and live action videos with ease. You can also add transitions, effects, and text animations to make your videos more engaging and attractive.
-In this article, we will show you how to download and install Explaindio Video Creator 3.0 Crack for free, and how to use it to create stunning videos for your business, marketing, or education purposes.
-How to Download and Install Explaindio Video Creator 3.0 Crack
-Explaindio Video Creator 3.0 Crack is a full version of the software that has been cracked by hackers. This means that you can use it without paying for the license or activation code. However, this also means that you are using illegal and potentially harmful software that may contain viruses, malware, or spyware. Therefore, we do not recommend or endorse using Explaindio Video Creator 3.0 Crack at all.
-If you still want to download and install Explaindio Video Creator 3.0 Crack at your own risk, here are the steps you need to follow:
-
-Turn off your internet connection and antivirus software.
-Download the Explaindio Video Creator 3.0 Crack file from a reliable source.
-Extract the file using WinRAR or any other file compression software.
-Run the setup.exe file and follow the installation wizard.
-Copy the patch file from the crack folder and paste it into the installation directory of Explaindio Video Creator 3.0.
-Run the patch file as administrator and click on the patch button.
-Block the software from accessing the internet using your firewall.
-
-Congratulations, you have successfully installed Explaindio Video Creator 3.0 Crack on your computer. Now you can start creating amazing animated videos with it.
-How to Use Explaindio Video Creator 3.0 Crack
-Explaindio Video Creator 3.0 Crack is a user-friendly software that has a simple and intuitive interface. You can easily create animated videos by following these steps:
-
-Launch Explaindio Video Creator 3.0 Crack and choose a project type: sketch video, animation video, or live action video.
-Add scenes to your project by dragging and dropping them from the library or importing them from your computer.
-Edit each scene by adding images, videos, text, audio, animations, effects, and transitions.
-Preview your project and make any adjustments as needed.
-Export your project as an MP4 video file or upload it directly to YouTube or Facebook.
-
-That's it! You have just created a stunning animated video with Explaindio Video Creator 3.0 Crack. You can use it for any purpose you want, such as promoting your products or services, teaching your students, or entertaining your audience.
-
-Conclusion
-Explaindio Video Creator 3.0 Crack is powerful software that can help you create professional-looking animated videos in minutes. However, it is also illegal and potentially harmful software that may damage your computer or violate the intellectual property rights of the original developers. Therefore, we strongly advise you not to use Explaindio Video Creator 3.0 Crack at all.
-If you want to use a legitimate and safe software that can offer you similar features and benefits as Explaindio Video Creator 3.0 Crack, we recommend you to try out Explaindio Video Creator , which is the official version of the software that you can purchase for a reasonable price. You will also get access to updates, support, and tutorials that will help you create amazing animated videos with ease.
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/bazaar.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/bazaar.py
deleted file mode 100644
index a7b16e2e0528b9852b517171f0afbd578104f13b..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/vcs/bazaar.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import logging
-from typing import List, Optional, Tuple
-
-from pip._internal.utils.misc import HiddenText, display_path
-from pip._internal.utils.subprocess import make_command
-from pip._internal.utils.urls import path_to_url
-from pip._internal.vcs.versioncontrol import (
- AuthInfo,
- RemoteNotFoundError,
- RevOptions,
- VersionControl,
- vcs,
-)
-
-logger = logging.getLogger(__name__)
-
-
-class Bazaar(VersionControl):
- name = "bzr"
- dirname = ".bzr"
- repo_name = "branch"
- schemes = (
- "bzr+http",
- "bzr+https",
- "bzr+ssh",
- "bzr+sftp",
- "bzr+ftp",
- "bzr+lp",
- "bzr+file",
- )
-
- @staticmethod
- def get_base_rev_args(rev: str) -> List[str]:
- return ["-r", rev]
-
- def fetch_new(
- self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
- ) -> None:
- rev_display = rev_options.to_display()
- logger.info(
- "Checking out %s%s to %s",
- url,
- rev_display,
- display_path(dest),
- )
- if verbosity <= 0:
- flag = "--quiet"
- elif verbosity == 1:
- flag = ""
- else:
- flag = f"-{'v'*verbosity}"
- cmd_args = make_command("branch", flag, rev_options.to_args(), url, dest)
- self.run_command(cmd_args)
-
- def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- self.run_command(make_command("switch", url), cwd=dest)
-
- def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- cmd_args = make_command("pull", "-q", rev_options.to_args())
- self.run_command(cmd_args, cwd=dest)
-
- @classmethod
- def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:
- # hotfix the URL scheme after removing bzr+ from bzr+ssh:// readd it
- url, rev, user_pass = super().get_url_rev_and_auth(url)
- if url.startswith("ssh://"):
- url = "bzr+" + url
- return url, rev, user_pass
-
- @classmethod
- def get_remote_url(cls, location: str) -> str:
- urls = cls.run_command(
- ["info"], show_stdout=False, stdout_only=True, cwd=location
- )
- for line in urls.splitlines():
- line = line.strip()
- for x in ("checkout of branch: ", "parent branch: "):
- if line.startswith(x):
- repo = line.split(x)[1]
- if cls._is_local_repository(repo):
- return path_to_url(repo)
- return repo
- raise RemoteNotFoundError
-
- @classmethod
- def get_revision(cls, location: str) -> str:
- revision = cls.run_command(
- ["revno"],
- show_stdout=False,
- stdout_only=True,
- cwd=location,
- )
- return revision.splitlines()[-1]
-
- @classmethod
- def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
- """Always assume the versions don't match"""
- return False
-
-
-vcs.register(Bazaar)
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py
deleted file mode 100644
index 82e3684daaa911dcb542cce847b8354e242385da..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py
+++ /dev/null
@@ -1,1286 +0,0 @@
-"""distutils.dist
-
-Provides the Distribution class, which represents the module distribution
-being built/installed/distributed.
-"""
-
-import sys
-import os
-import re
-from email import message_from_file
-
-try:
- import warnings
-except ImportError:
- warnings = None
-
-from distutils.errors import *
-from distutils.fancy_getopt import FancyGetopt, translate_longopt
-from distutils.util import check_environ, strtobool, rfc822_escape
-from distutils import log
-from distutils.debug import DEBUG
-
-# Regex to define acceptable Distutils command names. This is not *quite*
-# the same as a Python NAME -- I don't allow leading underscores. The fact
-# that they're very similar is no coincidence; the default naming scheme is
-# to look for a Python module named after the command.
-command_re = re.compile(r'^[a-zA-Z]([a-zA-Z0-9_]*)$')
-
-
-def _ensure_list(value, fieldname):
- if isinstance(value, str):
- # a string containing comma separated values is okay. It will
- # be converted to a list by Distribution.finalize_options().
- pass
- elif not isinstance(value, list):
- # passing a tuple or an iterator perhaps, warn and convert
- typename = type(value).__name__
- msg = "Warning: '{fieldname}' should be a list, got type '{typename}'"
- msg = msg.format(**locals())
- log.log(log.WARN, msg)
- value = list(value)
- return value
-
-
-class Distribution:
- """The core of the Distutils. Most of the work hiding behind 'setup'
- is really done within a Distribution instance, which farms the work out
- to the Distutils commands specified on the command line.
-
- Setup scripts will almost never instantiate Distribution directly,
- unless the 'setup()' function is totally inadequate to their needs.
- However, it is conceivable that a setup script might wish to subclass
- Distribution for some specialized purpose, and then pass the subclass
- to 'setup()' as the 'distclass' keyword argument. If so, it is
- necessary to respect the expectations that 'setup' has of Distribution.
- See the code for 'setup()', in core.py, for details.
- """
-
- # 'global_options' describes the command-line options that may be
- # supplied to the setup script prior to any actual commands.
- # Eg. "./setup.py -n" or "./setup.py --quiet" both take advantage of
- # these global options. This list should be kept to a bare minimum,
- # since every global option is also valid as a command option -- and we
- # don't want to pollute the commands with too many options that they
- # have minimal control over.
- # The fourth entry for verbose means that it can be repeated.
- global_options = [
- ('verbose', 'v', "run verbosely (default)", 1),
- ('quiet', 'q', "run quietly (turns verbosity off)"),
- ('dry-run', 'n', "don't actually do anything"),
- ('help', 'h', "show detailed help message"),
- ('no-user-cfg', None, 'ignore pydistutils.cfg in your home directory'),
- ]
-
- # 'common_usage' is a short (2-3 line) string describing the common
- # usage of the setup script.
- common_usage = """\
-Common commands: (see '--help-commands' for more)
-
- setup.py build will build the package underneath 'build/'
- setup.py install will install the package
-"""
-
- # options that are not propagated to the commands
- display_options = [
- ('help-commands', None, "list all available commands"),
- ('name', None, "print package name"),
- ('version', 'V', "print package version"),
- ('fullname', None, "print -"),
- ('author', None, "print the author's name"),
- ('author-email', None, "print the author's email address"),
- ('maintainer', None, "print the maintainer's name"),
- ('maintainer-email', None, "print the maintainer's email address"),
- ('contact', None, "print the maintainer's name if known, else the author's"),
- (
- 'contact-email',
- None,
- "print the maintainer's email address if known, else the author's",
- ),
- ('url', None, "print the URL for this package"),
- ('license', None, "print the license of the package"),
- ('licence', None, "alias for --license"),
- ('description', None, "print the package description"),
- ('long-description', None, "print the long package description"),
- ('platforms', None, "print the list of platforms"),
- ('classifiers', None, "print the list of classifiers"),
- ('keywords', None, "print the list of keywords"),
- ('provides', None, "print the list of packages/modules provided"),
- ('requires', None, "print the list of packages/modules required"),
- ('obsoletes', None, "print the list of packages/modules made obsolete"),
- ]
- display_option_names = [translate_longopt(x[0]) for x in display_options]
-
- # negative options are options that exclude other options
- negative_opt = {'quiet': 'verbose'}
-
- # -- Creation/initialization methods -------------------------------
-
- def __init__(self, attrs=None):
- """Construct a new Distribution instance: initialize all the
- attributes of a Distribution, and then use 'attrs' (a dictionary
- mapping attribute names to values) to assign some of those
- attributes their "real" values. (Any attributes not mentioned in
- 'attrs' will be assigned to some null value: 0, None, an empty list
- or dictionary, etc.) Most importantly, initialize the
- 'command_obj' attribute to the empty dictionary; this will be
- filled in with real command objects by 'parse_command_line()'.
- """
-
- # Default values for our command-line options
- self.verbose = 1
- self.dry_run = 0
- self.help = 0
- for attr in self.display_option_names:
- setattr(self, attr, 0)
-
- # Store the distribution meta-data (name, version, author, and so
- # forth) in a separate object -- we're getting to have enough
- # information here (and enough command-line options) that it's
- # worth it. Also delegate 'get_XXX()' methods to the 'metadata'
- # object in a sneaky and underhanded (but efficient!) way.
- self.metadata = DistributionMetadata()
- for basename in self.metadata._METHOD_BASENAMES:
- method_name = "get_" + basename
- setattr(self, method_name, getattr(self.metadata, method_name))
-
- # 'cmdclass' maps command names to class objects, so we
- # can 1) quickly figure out which class to instantiate when
- # we need to create a new command object, and 2) have a way
- # for the setup script to override command classes
- self.cmdclass = {}
-
- # 'command_packages' is a list of packages in which commands
- # are searched for. The factory for command 'foo' is expected
- # to be named 'foo' in the module 'foo' in one of the packages
- # named here. This list is searched from the left; an error
- # is raised if no named package provides the command being
- # searched for. (Always access using get_command_packages().)
- self.command_packages = None
-
- # 'script_name' and 'script_args' are usually set to sys.argv[0]
- # and sys.argv[1:], but they can be overridden when the caller is
- # not necessarily a setup script run from the command-line.
- self.script_name = None
- self.script_args = None
-
- # 'command_options' is where we store command options between
- # parsing them (from config files, the command-line, etc.) and when
- # they are actually needed -- ie. when the command in question is
- # instantiated. It is a dictionary of dictionaries of 2-tuples:
- # command_options = { command_name : { option : (source, value) } }
- self.command_options = {}
-
- # 'dist_files' is the list of (command, pyversion, file) that
- # have been created by any dist commands run so far. This is
- # filled regardless of whether the run is dry or not. pyversion
- # gives sysconfig.get_python_version() if the dist file is
- # specific to a Python version, 'any' if it is good for all
- # Python versions on the target platform, and '' for a source
- # file. pyversion should not be used to specify minimum or
- # maximum required Python versions; use the metainfo for that
- # instead.
- self.dist_files = []
-
- # These options are really the business of various commands, rather
- # than of the Distribution itself. We provide aliases for them in
- # Distribution as a convenience to the developer.
- self.packages = None
- self.package_data = {}
- self.package_dir = None
- self.py_modules = None
- self.libraries = None
- self.headers = None
- self.ext_modules = None
- self.ext_package = None
- self.include_dirs = None
- self.extra_path = None
- self.scripts = None
- self.data_files = None
- self.password = ''
-
- # And now initialize bookkeeping stuff that can't be supplied by
- # the caller at all. 'command_obj' maps command names to
- # Command instances -- that's how we enforce that every command
- # class is a singleton.
- self.command_obj = {}
-
- # 'have_run' maps command names to boolean values; it keeps track
- # of whether we have actually run a particular command, to make it
- # cheap to "run" a command whenever we think we might need to -- if
- # it's already been done, no need for expensive filesystem
- # operations, we just check the 'have_run' dictionary and carry on.
- # It's only safe to query 'have_run' for a command class that has
- # been instantiated -- a false value will be inserted when the
- # command object is created, and replaced with a true value when
- # the command is successfully run. Thus it's probably best to use
- # '.get()' rather than a straight lookup.
- self.have_run = {}
-
- # Now we'll use the attrs dictionary (ultimately, keyword args from
- # the setup script) to possibly override any or all of these
- # distribution options.
-
- if attrs:
- # Pull out the set of command options and work on them
- # specifically. Note that this order guarantees that aliased
- # command options will override any supplied redundantly
- # through the general options dictionary.
- options = attrs.get('options')
- if options is not None:
- del attrs['options']
- for (command, cmd_options) in options.items():
- opt_dict = self.get_option_dict(command)
- for (opt, val) in cmd_options.items():
- opt_dict[opt] = ("setup script", val)
-
- if 'licence' in attrs:
- attrs['license'] = attrs['licence']
- del attrs['licence']
- msg = "'licence' distribution option is deprecated; use 'license'"
- if warnings is not None:
- warnings.warn(msg)
- else:
- sys.stderr.write(msg + "\n")
-
- # Now work on the rest of the attributes. Any attribute that's
- # not already defined is invalid!
- for (key, val) in attrs.items():
- if hasattr(self.metadata, "set_" + key):
- getattr(self.metadata, "set_" + key)(val)
- elif hasattr(self.metadata, key):
- setattr(self.metadata, key, val)
- elif hasattr(self, key):
- setattr(self, key, val)
- else:
- msg = "Unknown distribution option: %s" % repr(key)
- warnings.warn(msg)
-
- # no-user-cfg is handled before other command line args
- # because other args override the config files, and this
- # one is needed before we can load the config files.
- # If attrs['script_args'] wasn't passed, assume false.
- #
- # This also make sure we just look at the global options
- self.want_user_cfg = True
-
- if self.script_args is not None:
- for arg in self.script_args:
- if not arg.startswith('-'):
- break
- if arg == '--no-user-cfg':
- self.want_user_cfg = False
- break
-
- self.finalize_options()
-
- def get_option_dict(self, command):
- """Get the option dictionary for a given command. If that
- command's option dictionary hasn't been created yet, then create it
- and return the new dictionary; otherwise, return the existing
- option dictionary.
- """
- dict = self.command_options.get(command)
- if dict is None:
- dict = self.command_options[command] = {}
- return dict
-
- def dump_option_dicts(self, header=None, commands=None, indent=""):
- from pprint import pformat
-
- if commands is None: # dump all command option dicts
- commands = sorted(self.command_options.keys())
-
- if header is not None:
- self.announce(indent + header)
- indent = indent + " "
-
- if not commands:
- self.announce(indent + "no commands known yet")
- return
-
- for cmd_name in commands:
- opt_dict = self.command_options.get(cmd_name)
- if opt_dict is None:
- self.announce(indent + "no option dict for '%s' command" % cmd_name)
- else:
- self.announce(indent + "option dict for '%s' command:" % cmd_name)
- out = pformat(opt_dict)
- for line in out.split('\n'):
- self.announce(indent + " " + line)
-
- # -- Config file finding/parsing methods ---------------------------
-
- def find_config_files(self):
- """Find as many configuration files as should be processed for this
- platform, and return a list of filenames in the order in which they
- should be parsed. The filenames returned are guaranteed to exist
- (modulo nasty race conditions).
-
- There are three possible config files: distutils.cfg in the
- Distutils installation directory (ie. where the top-level
- Distutils __inst__.py file lives), a file in the user's home
- directory named .pydistutils.cfg on Unix and pydistutils.cfg
- on Windows/Mac; and setup.cfg in the current directory.
-
- The file in the user's home directory can be disabled with the
- --no-user-cfg option.
- """
- files = []
- check_environ()
-
- # Where to look for the system-wide Distutils config file
- sys_dir = os.path.dirname(sys.modules['distutils'].__file__)
-
- # Look for the system config file
- sys_file = os.path.join(sys_dir, "distutils.cfg")
- if os.path.isfile(sys_file):
- files.append(sys_file)
-
- # What to call the per-user config file
- if os.name == 'posix':
- user_filename = ".pydistutils.cfg"
- else:
- user_filename = "pydistutils.cfg"
-
- # And look for the user config file
- if self.want_user_cfg:
- user_file = os.path.join(os.path.expanduser('~'), user_filename)
- if os.path.isfile(user_file):
- files.append(user_file)
-
- # All platforms support local setup.cfg
- local_file = "setup.cfg"
- if os.path.isfile(local_file):
- files.append(local_file)
-
- if DEBUG:
- self.announce("using config files: %s" % ', '.join(files))
-
- return files
-
- def parse_config_files(self, filenames=None):
- from configparser import ConfigParser
-
- # Ignore install directory options if we have a venv
- if sys.prefix != sys.base_prefix:
- ignore_options = [
- 'install-base',
- 'install-platbase',
- 'install-lib',
- 'install-platlib',
- 'install-purelib',
- 'install-headers',
- 'install-scripts',
- 'install-data',
- 'prefix',
- 'exec-prefix',
- 'home',
- 'user',
- 'root',
- ]
- else:
- ignore_options = []
-
- ignore_options = frozenset(ignore_options)
-
- if filenames is None:
- filenames = self.find_config_files()
-
- if DEBUG:
- self.announce("Distribution.parse_config_files():")
-
- parser = ConfigParser()
- for filename in filenames:
- if DEBUG:
- self.announce(" reading %s" % filename)
- parser.read(filename)
- for section in parser.sections():
- options = parser.options(section)
- opt_dict = self.get_option_dict(section)
-
- for opt in options:
- if opt != '__name__' and opt not in ignore_options:
- val = parser.get(section, opt)
- opt = opt.replace('-', '_')
- opt_dict[opt] = (filename, val)
-
- # Make the ConfigParser forget everything (so we retain
- # the original filenames that options come from)
- parser.__init__()
-
- # If there was a "global" section in the config file, use it
- # to set Distribution options.
-
- if 'global' in self.command_options:
- for (opt, (src, val)) in self.command_options['global'].items():
- alias = self.negative_opt.get(opt)
- try:
- if alias:
- setattr(self, alias, not strtobool(val))
- elif opt in ('verbose', 'dry_run'): # ugh!
- setattr(self, opt, strtobool(val))
- else:
- setattr(self, opt, val)
- except ValueError as msg:
- raise DistutilsOptionError(msg)
-
- # -- Command-line parsing methods ----------------------------------
-
- def parse_command_line(self):
- """Parse the setup script's command line, taken from the
- 'script_args' instance attribute (which defaults to 'sys.argv[1:]'
- -- see 'setup()' in core.py). This list is first processed for
- "global options" -- options that set attributes of the Distribution
- instance. Then, it is alternately scanned for Distutils commands
- and options for that command. Each new command terminates the
- options for the previous command. The allowed options for a
- command are determined by the 'user_options' attribute of the
- command class -- thus, we have to be able to load command classes
- in order to parse the command line. Any error in that 'options'
- attribute raises DistutilsGetoptError; any error on the
- command-line raises DistutilsArgError. If no Distutils commands
- were found on the command line, raises DistutilsArgError. Return
- true if command-line was successfully parsed and we should carry
- on with executing commands; false if no errors but we shouldn't
- execute commands (currently, this only happens if user asks for
- help).
- """
- #
- # We now have enough information to show the Macintosh dialog
- # that allows the user to interactively specify the "command line".
- #
- toplevel_options = self._get_toplevel_options()
-
- # We have to parse the command line a bit at a time -- global
- # options, then the first command, then its options, and so on --
- # because each command will be handled by a different class, and
- # the options that are valid for a particular class aren't known
- # until we have loaded the command class, which doesn't happen
- # until we know what the command is.
-
- self.commands = []
- parser = FancyGetopt(toplevel_options + self.display_options)
- parser.set_negative_aliases(self.negative_opt)
- parser.set_aliases({'licence': 'license'})
- args = parser.getopt(args=self.script_args, object=self)
- option_order = parser.get_option_order()
- log.set_verbosity(self.verbose)
-
- # for display options we return immediately
- if self.handle_display_options(option_order):
- return
- while args:
- args = self._parse_command_opts(parser, args)
- if args is None: # user asked for help (and got it)
- return
-
- # Handle the cases of --help as a "global" option, ie.
- # "setup.py --help" and "setup.py --help command ...". For the
- # former, we show global options (--verbose, --dry-run, etc.)
- # and display-only options (--name, --version, etc.); for the
- # latter, we omit the display-only options and show help for
- # each command listed on the command line.
- if self.help:
- self._show_help(
- parser, display_options=len(self.commands) == 0, commands=self.commands
- )
- return
-
- # Oops, no commands found -- an end-user error
- if not self.commands:
- raise DistutilsArgError("no commands supplied")
-
- # All is well: return true
- return True
-
- def _get_toplevel_options(self):
- """Return the non-display options recognized at the top level.
-
- This includes options that are recognized *only* at the top
- level as well as options recognized for commands.
- """
- return self.global_options + [
- (
- "command-packages=",
- None,
- "list of packages that provide distutils commands",
- ),
- ]
-
- def _parse_command_opts(self, parser, args):
- """Parse the command-line options for a single command.
- 'parser' must be a FancyGetopt instance; 'args' must be the list
- of arguments, starting with the current command (whose options
- we are about to parse). Returns a new version of 'args' with
- the next command at the front of the list; will be the empty
- list if there are no more commands on the command line. Returns
- None if the user asked for help on this command.
- """
- # late import because of mutual dependence between these modules
- from distutils.cmd import Command
-
- # Pull the current command from the head of the command line
- command = args[0]
- if not command_re.match(command):
- raise SystemExit("invalid command name '%s'" % command)
- self.commands.append(command)
-
- # Dig up the command class that implements this command, so we
- # 1) know that it's a valid command, and 2) know which options
- # it takes.
- try:
- cmd_class = self.get_command_class(command)
- except DistutilsModuleError as msg:
- raise DistutilsArgError(msg)
-
- # Require that the command class be derived from Command -- want
- # to be sure that the basic "command" interface is implemented.
- if not issubclass(cmd_class, Command):
- raise DistutilsClassError(
- "command class %s must subclass Command" % cmd_class
- )
-
- # Also make sure that the command object provides a list of its
- # known options.
- if not (
- hasattr(cmd_class, 'user_options')
- and isinstance(cmd_class.user_options, list)
- ):
- msg = (
- "command class %s must provide "
- "'user_options' attribute (a list of tuples)"
- )
- raise DistutilsClassError(msg % cmd_class)
-
- # If the command class has a list of negative alias options,
- # merge it in with the global negative aliases.
- negative_opt = self.negative_opt
- if hasattr(cmd_class, 'negative_opt'):
- negative_opt = negative_opt.copy()
- negative_opt.update(cmd_class.negative_opt)
-
- # Check for help_options in command class. They have a different
- # format (tuple of four) so we need to preprocess them here.
- if hasattr(cmd_class, 'help_options') and isinstance(
- cmd_class.help_options, list
- ):
- help_options = fix_help_options(cmd_class.help_options)
- else:
- help_options = []
-
- # All commands support the global options too, just by adding
- # in 'global_options'.
- parser.set_option_table(
- self.global_options + cmd_class.user_options + help_options
- )
- parser.set_negative_aliases(negative_opt)
- (args, opts) = parser.getopt(args[1:])
- if hasattr(opts, 'help') and opts.help:
- self._show_help(parser, display_options=0, commands=[cmd_class])
- return
-
- if hasattr(cmd_class, 'help_options') and isinstance(
- cmd_class.help_options, list
- ):
- help_option_found = 0
- for (help_option, short, desc, func) in cmd_class.help_options:
- if hasattr(opts, parser.get_attr_name(help_option)):
- help_option_found = 1
- if callable(func):
- func()
- else:
- raise DistutilsClassError(
- "invalid help function %r for help option '%s': "
- "must be a callable object (function, etc.)"
- % (func, help_option)
- )
-
- if help_option_found:
- return
-
- # Put the options from the command-line into their official
- # holding pen, the 'command_options' dictionary.
- opt_dict = self.get_option_dict(command)
- for (name, value) in vars(opts).items():
- opt_dict[name] = ("command line", value)
-
- return args
-
- def finalize_options(self):
- """Set final values for all the options on the Distribution
- instance, analogous to the .finalize_options() method of Command
- objects.
- """
- for attr in ('keywords', 'platforms'):
- value = getattr(self.metadata, attr)
- if value is None:
- continue
- if isinstance(value, str):
- value = [elm.strip() for elm in value.split(',')]
- setattr(self.metadata, attr, value)
-
- def _show_help(self, parser, global_options=1, display_options=1, commands=[]):
- """Show help for the setup script command-line in the form of
- several lists of command-line options. 'parser' should be a
- FancyGetopt instance; do not expect it to be returned in the
- same state, as its option table will be reset to make it
- generate the correct help text.
-
- If 'global_options' is true, lists the global options:
- --verbose, --dry-run, etc. If 'display_options' is true, lists
- the "display-only" options: --name, --version, etc. Finally,
- lists per-command help for every command name or command class
- in 'commands'.
- """
- # late import because of mutual dependence between these modules
- from distutils.core import gen_usage
- from distutils.cmd import Command
-
- if global_options:
- if display_options:
- options = self._get_toplevel_options()
- else:
- options = self.global_options
- parser.set_option_table(options)
- parser.print_help(self.common_usage + "\nGlobal options:")
- print('')
-
- if display_options:
- parser.set_option_table(self.display_options)
- parser.print_help(
- "Information display options (just display "
- + "information, ignore any commands)"
- )
- print('')
-
- for command in self.commands:
- if isinstance(command, type) and issubclass(command, Command):
- klass = command
- else:
- klass = self.get_command_class(command)
- if hasattr(klass, 'help_options') and isinstance(klass.help_options, list):
- parser.set_option_table(
- klass.user_options + fix_help_options(klass.help_options)
- )
- else:
- parser.set_option_table(klass.user_options)
- parser.print_help("Options for '%s' command:" % klass.__name__)
- print('')
-
- print(gen_usage(self.script_name))
-
- def handle_display_options(self, option_order):
- """If there were any non-global "display-only" options
- (--help-commands or the metadata display options) on the command
- line, display the requested info and return true; else return
- false.
- """
- from distutils.core import gen_usage
-
- # User just wants a list of commands -- we'll print it out and stop
- # processing now (ie. if they ran "setup --help-commands foo bar",
- # we ignore "foo bar").
- if self.help_commands:
- self.print_commands()
- print('')
- print(gen_usage(self.script_name))
- return 1
-
- # If user supplied any of the "display metadata" options, then
- # display that metadata in the order in which the user supplied the
- # metadata options.
- any_display_options = 0
- is_display_option = {}
- for option in self.display_options:
- is_display_option[option[0]] = 1
-
- for (opt, val) in option_order:
- if val and is_display_option.get(opt):
- opt = translate_longopt(opt)
- value = getattr(self.metadata, "get_" + opt)()
- if opt in ['keywords', 'platforms']:
- print(','.join(value))
- elif opt in ('classifiers', 'provides', 'requires', 'obsoletes'):
- print('\n'.join(value))
- else:
- print(value)
- any_display_options = 1
-
- return any_display_options
-
- def print_command_list(self, commands, header, max_length):
- """Print a subset of the list of all commands -- used by
- 'print_commands()'.
- """
- print(header + ":")
-
- for cmd in commands:
- klass = self.cmdclass.get(cmd)
- if not klass:
- klass = self.get_command_class(cmd)
- try:
- description = klass.description
- except AttributeError:
- description = "(no description available)"
-
- print(" %-*s %s" % (max_length, cmd, description))
-
- def print_commands(self):
- """Print out a help message listing all available commands with a
- description of each. The list is divided into "standard commands"
- (listed in distutils.command.__all__) and "extra commands"
- (mentioned in self.cmdclass, but not a standard command). The
- descriptions come from the command class attribute
- 'description'.
- """
- import distutils.command
-
- std_commands = distutils.command.__all__
- is_std = {}
- for cmd in std_commands:
- is_std[cmd] = 1
-
- extra_commands = []
- for cmd in self.cmdclass.keys():
- if not is_std.get(cmd):
- extra_commands.append(cmd)
-
- max_length = 0
- for cmd in std_commands + extra_commands:
- if len(cmd) > max_length:
- max_length = len(cmd)
-
- self.print_command_list(std_commands, "Standard commands", max_length)
- if extra_commands:
- print()
- self.print_command_list(extra_commands, "Extra commands", max_length)
-
- def get_command_list(self):
- """Get a list of (command, description) tuples.
- The list is divided into "standard commands" (listed in
- distutils.command.__all__) and "extra commands" (mentioned in
- self.cmdclass, but not a standard command). The descriptions come
- from the command class attribute 'description'.
- """
- # Currently this is only used on Mac OS, for the Mac-only GUI
- # Distutils interface (by Jack Jansen)
- import distutils.command
-
- std_commands = distutils.command.__all__
- is_std = {}
- for cmd in std_commands:
- is_std[cmd] = 1
-
- extra_commands = []
- for cmd in self.cmdclass.keys():
- if not is_std.get(cmd):
- extra_commands.append(cmd)
-
- rv = []
- for cmd in std_commands + extra_commands:
- klass = self.cmdclass.get(cmd)
- if not klass:
- klass = self.get_command_class(cmd)
- try:
- description = klass.description
- except AttributeError:
- description = "(no description available)"
- rv.append((cmd, description))
- return rv
-
- # -- Command class/object methods ----------------------------------
-
- def get_command_packages(self):
- """Return a list of packages from which commands are loaded."""
- pkgs = self.command_packages
- if not isinstance(pkgs, list):
- if pkgs is None:
- pkgs = ''
- pkgs = [pkg.strip() for pkg in pkgs.split(',') if pkg != '']
- if "distutils.command" not in pkgs:
- pkgs.insert(0, "distutils.command")
- self.command_packages = pkgs
- return pkgs
-
- def get_command_class(self, command):
- """Return the class that implements the Distutils command named by
- 'command'. First we check the 'cmdclass' dictionary; if the
- command is mentioned there, we fetch the class object from the
- dictionary and return it. Otherwise we load the command module
- ("distutils.command." + command) and fetch the command class from
- the module. The loaded class is also stored in 'cmdclass'
- to speed future calls to 'get_command_class()'.
-
- Raises DistutilsModuleError if the expected module could not be
- found, or if that module does not define the expected class.
- """
- klass = self.cmdclass.get(command)
- if klass:
- return klass
-
- for pkgname in self.get_command_packages():
- module_name = "%s.%s" % (pkgname, command)
- klass_name = command
-
- try:
- __import__(module_name)
- module = sys.modules[module_name]
- except ImportError:
- continue
-
- try:
- klass = getattr(module, klass_name)
- except AttributeError:
- raise DistutilsModuleError(
- "invalid command '%s' (no class '%s' in module '%s')"
- % (command, klass_name, module_name)
- )
-
- self.cmdclass[command] = klass
- return klass
-
- raise DistutilsModuleError("invalid command '%s'" % command)
-
- def get_command_obj(self, command, create=1):
- """Return the command object for 'command'. Normally this object
- is cached on a previous call to 'get_command_obj()'; if no command
- object for 'command' is in the cache, then we either create and
- return it (if 'create' is true) or return None.
- """
- cmd_obj = self.command_obj.get(command)
- if not cmd_obj and create:
- if DEBUG:
- self.announce(
- "Distribution.get_command_obj(): "
- "creating '%s' command object" % command
- )
-
- klass = self.get_command_class(command)
- cmd_obj = self.command_obj[command] = klass(self)
- self.have_run[command] = 0
-
- # Set any options that were supplied in config files
- # or on the command line. (NB. support for error
- # reporting is lame here: any errors aren't reported
- # until 'finalize_options()' is called, which means
- # we won't report the source of the error.)
- options = self.command_options.get(command)
- if options:
- self._set_command_options(cmd_obj, options)
-
- return cmd_obj
-
- def _set_command_options(self, command_obj, option_dict=None):
- """Set the options for 'command_obj' from 'option_dict'. Basically
- this means copying elements of a dictionary ('option_dict') to
- attributes of an instance ('command').
-
- 'command_obj' must be a Command instance. If 'option_dict' is not
- supplied, uses the standard option dictionary for this command
- (from 'self.command_options').
- """
- command_name = command_obj.get_command_name()
- if option_dict is None:
- option_dict = self.get_option_dict(command_name)
-
- if DEBUG:
- self.announce(" setting options for '%s' command:" % command_name)
- for (option, (source, value)) in option_dict.items():
- if DEBUG:
- self.announce(" %s = %s (from %s)" % (option, value, source))
- try:
- bool_opts = [translate_longopt(o) for o in command_obj.boolean_options]
- except AttributeError:
- bool_opts = []
- try:
- neg_opt = command_obj.negative_opt
- except AttributeError:
- neg_opt = {}
-
- try:
- is_string = isinstance(value, str)
- if option in neg_opt and is_string:
- setattr(command_obj, neg_opt[option], not strtobool(value))
- elif option in bool_opts and is_string:
- setattr(command_obj, option, strtobool(value))
- elif hasattr(command_obj, option):
- setattr(command_obj, option, value)
- else:
- raise DistutilsOptionError(
- "error in %s: command '%s' has no such option '%s'"
- % (source, command_name, option)
- )
- except ValueError as msg:
- raise DistutilsOptionError(msg)
-
- def reinitialize_command(self, command, reinit_subcommands=0):
- """Reinitializes a command to the state it was in when first
- returned by 'get_command_obj()': ie., initialized but not yet
- finalized. This provides the opportunity to sneak option
- values in programmatically, overriding or supplementing
- user-supplied values from the config files and command line.
- You'll have to re-finalize the command object (by calling
- 'finalize_options()' or 'ensure_finalized()') before using it for
- real.
-
- 'command' should be a command name (string) or command object. If
- 'reinit_subcommands' is true, also reinitializes the command's
- sub-commands, as declared by the 'sub_commands' class attribute (if
- it has one). See the "install" command for an example. Only
- reinitializes the sub-commands that actually matter, ie. those
- whose test predicates return true.
-
- Returns the reinitialized command object.
- """
- from distutils.cmd import Command
-
- if not isinstance(command, Command):
- command_name = command
- command = self.get_command_obj(command_name)
- else:
- command_name = command.get_command_name()
-
- if not command.finalized:
- return command
- command.initialize_options()
- command.finalized = 0
- self.have_run[command_name] = 0
- self._set_command_options(command)
-
- if reinit_subcommands:
- for sub in command.get_sub_commands():
- self.reinitialize_command(sub, reinit_subcommands)
-
- return command
-
- # -- Methods that operate on the Distribution ----------------------
-
- def announce(self, msg, level=log.INFO):
- log.log(level, msg)
-
- def run_commands(self):
- """Run each command that was seen on the setup script command line.
- Uses the list of commands found and cache of command objects
- created by 'get_command_obj()'.
- """
- for cmd in self.commands:
- self.run_command(cmd)
-
- # -- Methods that operate on its Commands --------------------------
-
- def run_command(self, command):
- """Do whatever it takes to run a command (including nothing at all,
- if the command has already been run). Specifically: if we have
- already created and run the command named by 'command', return
- silently without doing anything. If the command named by 'command'
- doesn't even have a command object yet, create one. Then invoke
- 'run()' on that command object (or an existing one).
- """
- # Already been here, done that? then return silently.
- if self.have_run.get(command):
- return
-
- log.info("running %s", command)
- cmd_obj = self.get_command_obj(command)
- cmd_obj.ensure_finalized()
- cmd_obj.run()
- self.have_run[command] = 1
-
- # -- Distribution query methods ------------------------------------
-
- def has_pure_modules(self):
- return len(self.packages or self.py_modules or []) > 0
-
- def has_ext_modules(self):
- return self.ext_modules and len(self.ext_modules) > 0
-
- def has_c_libraries(self):
- return self.libraries and len(self.libraries) > 0
-
- def has_modules(self):
- return self.has_pure_modules() or self.has_ext_modules()
-
- def has_headers(self):
- return self.headers and len(self.headers) > 0
-
- def has_scripts(self):
- return self.scripts and len(self.scripts) > 0
-
- def has_data_files(self):
- return self.data_files and len(self.data_files) > 0
-
- def is_pure(self):
- return (
- self.has_pure_modules()
- and not self.has_ext_modules()
- and not self.has_c_libraries()
- )
-
- # -- Metadata query methods ----------------------------------------
-
- # If you're looking for 'get_name()', 'get_version()', and so forth,
- # they are defined in a sneaky way: the constructor binds self.get_XXX
- # to self.metadata.get_XXX. The actual code is in the
- # DistributionMetadata class, below.
-
-
-class DistributionMetadata:
- """Dummy class to hold the distribution meta-data: name, version,
- author, and so forth.
- """
-
- _METHOD_BASENAMES = (
- "name",
- "version",
- "author",
- "author_email",
- "maintainer",
- "maintainer_email",
- "url",
- "license",
- "description",
- "long_description",
- "keywords",
- "platforms",
- "fullname",
- "contact",
- "contact_email",
- "classifiers",
- "download_url",
- # PEP 314
- "provides",
- "requires",
- "obsoletes",
- )
-
- def __init__(self, path=None):
- if path is not None:
- self.read_pkg_file(open(path))
- else:
- self.name = None
- self.version = None
- self.author = None
- self.author_email = None
- self.maintainer = None
- self.maintainer_email = None
- self.url = None
- self.license = None
- self.description = None
- self.long_description = None
- self.keywords = None
- self.platforms = None
- self.classifiers = None
- self.download_url = None
- # PEP 314
- self.provides = None
- self.requires = None
- self.obsoletes = None
-
- def read_pkg_file(self, file):
- """Reads the metadata values from a file object."""
- msg = message_from_file(file)
-
- def _read_field(name):
- value = msg[name]
- if value and value != "UNKNOWN":
- return value
-
- def _read_list(name):
- values = msg.get_all(name, None)
- if values == []:
- return None
- return values
-
- metadata_version = msg['metadata-version']
- self.name = _read_field('name')
- self.version = _read_field('version')
- self.description = _read_field('summary')
- # we are filling author only.
- self.author = _read_field('author')
- self.maintainer = None
- self.author_email = _read_field('author-email')
- self.maintainer_email = None
- self.url = _read_field('home-page')
- self.license = _read_field('license')
-
- if 'download-url' in msg:
- self.download_url = _read_field('download-url')
- else:
- self.download_url = None
-
- self.long_description = _read_field('description')
- self.description = _read_field('summary')
-
- if 'keywords' in msg:
- self.keywords = _read_field('keywords').split(',')
-
- self.platforms = _read_list('platform')
- self.classifiers = _read_list('classifier')
-
- # PEP 314 - these fields only exist in 1.1
- if metadata_version == '1.1':
- self.requires = _read_list('requires')
- self.provides = _read_list('provides')
- self.obsoletes = _read_list('obsoletes')
- else:
- self.requires = None
- self.provides = None
- self.obsoletes = None
-
- def write_pkg_info(self, base_dir):
- """Write the PKG-INFO file into the release tree."""
- with open(
- os.path.join(base_dir, 'PKG-INFO'), 'w', encoding='UTF-8'
- ) as pkg_info:
- self.write_pkg_file(pkg_info)
-
- def write_pkg_file(self, file):
- """Write the PKG-INFO format data to a file object."""
- version = '1.0'
- if (
- self.provides
- or self.requires
- or self.obsoletes
- or self.classifiers
- or self.download_url
- ):
- version = '1.1'
-
- # required fields
- file.write('Metadata-Version: %s\n' % version)
- file.write('Name: %s\n' % self.get_name())
- file.write('Version: %s\n' % self.get_version())
-
- def maybe_write(header, val):
- if val:
- file.write("{}: {}\n".format(header, val))
-
- # optional fields
- maybe_write("Summary", self.get_description())
- maybe_write("Home-page", self.get_url())
- maybe_write("Author", self.get_contact())
- maybe_write("Author-email", self.get_contact_email())
- maybe_write("License", self.get_license())
- maybe_write("Download-URL", self.download_url)
- maybe_write("Description", rfc822_escape(self.get_long_description() or ""))
- maybe_write("Keywords", ",".join(self.get_keywords()))
-
- self._write_list(file, 'Platform', self.get_platforms())
- self._write_list(file, 'Classifier', self.get_classifiers())
-
- # PEP 314
- self._write_list(file, 'Requires', self.get_requires())
- self._write_list(file, 'Provides', self.get_provides())
- self._write_list(file, 'Obsoletes', self.get_obsoletes())
-
- def _write_list(self, file, name, values):
- values = values or []
- for value in values:
- file.write('%s: %s\n' % (name, value))
-
- # -- Metadata query methods ----------------------------------------
-
- def get_name(self):
- return self.name or "UNKNOWN"
-
- def get_version(self):
- return self.version or "0.0.0"
-
- def get_fullname(self):
- return "%s-%s" % (self.get_name(), self.get_version())
-
- def get_author(self):
- return self.author
-
- def get_author_email(self):
- return self.author_email
-
- def get_maintainer(self):
- return self.maintainer
-
- def get_maintainer_email(self):
- return self.maintainer_email
-
- def get_contact(self):
- return self.maintainer or self.author
-
- def get_contact_email(self):
- return self.maintainer_email or self.author_email
-
- def get_url(self):
- return self.url
-
- def get_license(self):
- return self.license
-
- get_licence = get_license
-
- def get_description(self):
- return self.description
-
- def get_long_description(self):
- return self.long_description
-
- def get_keywords(self):
- return self.keywords or []
-
- def set_keywords(self, value):
- self.keywords = _ensure_list(value, 'keywords')
-
- def get_platforms(self):
- return self.platforms
-
- def set_platforms(self, value):
- self.platforms = _ensure_list(value, 'platforms')
-
- def get_classifiers(self):
- return self.classifiers or []
-
- def set_classifiers(self, value):
- self.classifiers = _ensure_list(value, 'classifiers')
-
- def get_download_url(self):
- return self.download_url
-
- # PEP 314
- def get_requires(self):
- return self.requires or []
-
- def set_requires(self, value):
- import distutils.versionpredicate
-
- for v in value:
- distutils.versionpredicate.VersionPredicate(v)
- self.requires = list(value)
-
- def get_provides(self):
- return self.provides or []
-
- def set_provides(self, value):
- value = [v.strip() for v in value]
- for v in value:
- import distutils.versionpredicate
-
- distutils.versionpredicate.split_provision(v)
- self.provides = value
-
- def get_obsoletes(self):
- return self.obsoletes or []
-
- def set_obsoletes(self, value):
- import distutils.versionpredicate
-
- for v in value:
- distutils.versionpredicate.VersionPredicate(v)
- self.obsoletes = list(value)
-
-
-def fix_help_options(options):
- """Convert a 4-tuple 'help_options' list as found in various command
- classes to the 3-tuple form required by FancyGetopt.
- """
- new_options = []
- for help_tuple in options:
- new_options.append(help_tuple[0:3])
- return new_options
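-
-
-def _example_distribution_usage():
-    """Illustrative sketch only (not part of the original module): shows how a
-    Distribution is typically driven by setup() -- metadata arrives via the
-    'attrs' dict, config files fill 'command_options', and per-command options
-    can then be inspected with 'get_option_dict()'. The names and values used
-    here ('demo', '0.1', 'build') are made up for the example.
-    """
-    dist = Distribution({'name': 'demo', 'version': '0.1'})
-    dist.parse_config_files()  # reads setup.cfg (plus user/system configs, if present)
-    build_opts = dist.get_option_dict('build')  # {'option': ('filename', 'value'), ...}
-    return build_opts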
diff --git a/spaces/tomandandy/MusicGen3/audiocraft/data/audio.py b/spaces/tomandandy/MusicGen3/audiocraft/data/audio.py
deleted file mode 100644
index 2048df6f175d7303bcf5c7b931922fd297908ead..0000000000000000000000000000000000000000
--- a/spaces/tomandandy/MusicGen3/audiocraft/data/audio.py
+++ /dev/null
@@ -1,215 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Audio IO methods are defined in this module (info, read, write),
-We rely on av library for faster read when possible, otherwise on torchaudio.
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-import logging
-import typing as tp
-
-import numpy as np
-import soundfile
-import torch
-from torch.nn import functional as F
-import torchaudio as ta
-
-import av
-
-from .audio_utils import f32_pcm, i16_pcm, normalize_audio
-
-
-_av_initialized = False
-
-
-def _init_av():
- global _av_initialized
- if _av_initialized:
- return
- logger = logging.getLogger('libav.mp3')
- logger.setLevel(logging.ERROR)
- _av_initialized = True
-
-
-@dataclass(frozen=True)
-class AudioFileInfo:
- sample_rate: int
- duration: float
- channels: int
-
-
-def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sample_rate = stream.codec_context.sample_rate
- duration = float(stream.duration * stream.time_base)
- channels = stream.channels
- return AudioFileInfo(sample_rate, duration, channels)
-
-
-def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- info = soundfile.info(filepath)
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
-
-
-def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
-    # torchaudio no longer returns useful duration information for some formats like mp3s.
- filepath = Path(filepath)
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
- # ffmpeg has some weird issue with flac.
- return _soundfile_info(filepath)
- else:
- return _av_info(filepath)
-
-
-def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
- """FFMPEG-based audio file reading using PyAV bindings.
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
- """
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sr = stream.codec_context.sample_rate
- num_frames = int(sr * duration) if duration >= 0 else -1
- frame_offset = int(sr * seek_time)
- # we need a small negative offset otherwise we get some edge artifact
- # from the mp3 decoder.
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
- frames = []
- length = 0
- for frame in af.decode(streams=stream.index):
- current_offset = int(frame.rate * frame.pts * frame.time_base)
- strip = max(0, frame_offset - current_offset)
- buf = torch.from_numpy(frame.to_ndarray())
- if buf.shape[0] != stream.channels:
- buf = buf.view(-1, stream.channels).t()
- buf = buf[:, strip:]
- frames.append(buf)
- length += buf.shape[1]
- if num_frames > 0 and length >= num_frames:
- break
- assert frames
- # If the above assert fails, it is likely because we seeked past the end of file point,
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
- # This will need proper debugging, in due time.
- wav = torch.cat(frames, dim=1)
- assert wav.shape[0] == stream.channels
- if num_frames > 0:
- wav = wav[:, :num_frames]
- return f32_pcm(wav), sr
-
-
-def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
- """Read audio by picking the most appropriate backend tool based on the audio format.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- pad (bool): Pad output audio if not reaching expected duration.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
- """
- fp = Path(filepath)
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
- # There is some bug with ffmpeg and reading flac
- info = _soundfile_info(filepath)
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
- frame_offset = int(seek_time * info.sample_rate)
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
- wav = torch.from_numpy(wav).t().contiguous()
- if len(wav.shape) == 1:
- wav = torch.unsqueeze(wav, 0)
- elif (
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
- and duration <= 0 and seek_time == 0
- ):
- # Torchaudio is faster if we load an entire file at once.
- wav, sr = ta.load(fp)
- else:
- wav, sr = _av_read(filepath, seek_time, duration)
- if pad and duration > 0:
- expected_frames = int(duration * sr)
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
- return wav, sr
-
-
-def audio_write(stem_name: tp.Union[str, Path],
- wav: torch.Tensor, sample_rate: int,
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False,
- log_clipping: bool = True, make_parent_dir: bool = True,
- add_suffix: bool = True) -> Path:
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
-
- Args:
- stem_name (str or Path): Filename without extension which will be added automatically.
- format (str): Either "wav" or "mp3".
- mp3_rate (int): kbps when using mp3s.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'.
-        log_clipping (bool): If True, basic logging on stderr when clipping still
-            occurs despite strategy (only for 'rms').
- make_parent_dir (bool): Make parent directory if it doesn't exist.
- Returns:
- Path: Path of the saved audio.
- """
- assert wav.dtype.is_floating_point, "wav is not floating point"
- if wav.dim() == 1:
- wav = wav[None]
- elif wav.dim() > 2:
- raise ValueError("Input wav should be at most 2 dimension.")
- assert wav.isfinite().all()
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
- sample_rate=sample_rate, stem_name=str(stem_name))
- kwargs: dict = {}
- if format == 'mp3':
- suffix = '.mp3'
- kwargs.update({"compression": mp3_rate})
- elif format == 'wav':
- wav = i16_pcm(wav)
- suffix = '.wav'
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
- else:
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
- if not add_suffix:
- suffix = ''
- path = Path(str(stem_name) + suffix)
- if make_parent_dir:
- path.parent.mkdir(exist_ok=True, parents=True)
- try:
- ta.save(path, wav, sample_rate, **kwargs)
- except Exception:
- if path.exists():
- # we do not want to leave half written files around.
- path.unlink()
- raise
- return path
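-
-
-def _example_roundtrip():
-    # Minimal usage sketch (the filename 'sample.wav' is an assumption for the
-    # example, not a file shipped with the code): read two seconds starting at
-    # 0.5s, padding if the file is shorter, then write the excerpt back to disk
-    # as a peak-normalized 16-bit PCM wav.
-    wav, sr = audio_read('sample.wav', seek_time=0.5, duration=2.0, pad=True)
-    out_path = audio_write('sample_excerpt', wav, sr, format='wav', strategy='peak')
-    return out_path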
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/wider_face/README.md b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/wider_face/README.md
deleted file mode 100644
index b8fe474257b69381dfb5656feffe3ad3389b25dd..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/wider_face/README.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# WIDER Face Dataset
-
-
-
-To use the WIDER Face dataset you need to download it
-and extract it to the `data/WIDERFace` folder. Annotations in the VOC format
-can be found in this [repo](https://github.com/sovrasov/wider-face-pascal-voc-annotations.git).
-You should move the annotation files from `WIDER_train_annotations` and `WIDER_val_annotations` folders
-to the `Annotation` folders inside the corresponding directories `WIDER_train` and `WIDER_val`.
-The annotation lists `val.txt` and `train.txt` should also be copied to `data/WIDERFace` from `WIDER_train_annotations` and `WIDER_val_annotations`.
-The directory should be like this:
-
-```
-mmdetection
-├── mmdet
-├── tools
-├── configs
-├── data
-│   ├── WIDERFace
-│   │   ├── WIDER_train
-│   │   │   ├── 0--Parade
-│   │   │   ├── ...
-│   │   │   ├── Annotations
-│   │   ├── WIDER_val
-│   │   │   ├── 0--Parade
-│   │   │   ├── ...
-│   │   │   ├── Annotations
-│   │   ├── val.txt
-│   │   ├── train.txt
-
-```
-
-After that you can train SSD300 on WIDER Face by launching training with the `ssd300_wider_face.py` config, or
-create your own config based on the presented one. For example:
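-
-A minimal launch sketch, assuming the standard MMDetection `tools/train.py` entry point is run
-from the repository root (swap in your own config path if you created one):
-
-```shell
-python tools/train.py configs/wider_face/ssd300_wider_face.py
-```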
-
-```
-@inproceedings{yang2016wider,
- Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
- Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
- Title = {WIDER FACE: A Face Detection Benchmark},
- Year = {2016}
-}
-```
diff --git a/spaces/tomsoderlund/swedish-entity-recognition/README.md b/spaces/tomsoderlund/swedish-entity-recognition/README.md
deleted file mode 100644
index e8ff704934a00c7c387cd4ef4e27694276fc6cc5..0000000000000000000000000000000000000000
--- a/spaces/tomsoderlund/swedish-entity-recognition/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Swedish Entity Recognition
-emoji: 🇸🇪
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.12.0
-python_version: 3.9.13
-app_file: app.py
-pinned: false
-license: openrail
-models:
- - KBLab/bert-base-swedish-cased-ner
----
-
-# Swedish Entity Recognition
-
-## Installing locally
-
-Setup:
-
- source env/bin/activate
- pip3 install -r requirements.txt
-
-Then run:
-
- python3 app.py
-
-and test your Gradio app on: http://127.0.0.1:7860/
-
-## REST API via Gradio’s “Use via API” feature (see page footer)
-
- curl -X POST -H 'Content-type: application/json' --data '{ "data": ["Jag heter Tom och bor i Stockholm."] }' https://tomsoderlund-swedish-entity-recognition.hf.space/run/predict
diff --git a/spaces/training-transformers-together/calc/mem_calc.py b/spaces/training-transformers-together/calc/mem_calc.py
deleted file mode 100644
index d86bf9296631793ea5633a2b0c324d8ed26b6aea..0000000000000000000000000000000000000000
--- a/spaces/training-transformers-together/calc/mem_calc.py
+++ /dev/null
@@ -1,237 +0,0 @@
-import argparse
-import math
-from models import models
-
-
-def get_GB(nbytes):
- return nbytes/(1024**3)
-
-
-def vocab(bsz, seqlen, dmodel, vocab_dim):
- # assumes tied embeddings
-
- w = vocab_dim*dmodel
- emb = seqlen*bsz*dmodel
- emb_norm = seqlen*bsz*dmodel
- pos_emb = seqlen*bsz*dmodel
- out_emb = seqlen*bsz*vocab_dim
- softmax_emb = seqlen*bsz*vocab_dim
-
- model = w + dmodel
- grad = emb + emb_norm + pos_emb + out_emb + softmax_emb
- grad *= 1
- return model, grad
-
-
-def transformer(bsz, seqlen, dmodel, nlayers, vocab_type, dhid=None,
- checkpoint=False, shared_groups=None):
- if dhid is None: dhid = 4*dmodel
- model = 0
- grad = 0
- for i in range(nlayers):
- m, g = transformer_layer(bsz, seqlen, dmodel, dhid, checkpoint=checkpoint)
- model += m
- grad += g
-
- if shared_groups is not None:
- model = model / nlayers * shared_groups
-
- m, g = vocab(bsz, seqlen, dmodel, vocab_type)
- model += m
- grad += g
-
- return model, grad
-
-def layer_norm(bsz, seqlen, dmodel):
- w = dmodel
- x_grad = bsz*seqlen*dmodel
- return w, x_grad
-
-
-def transformer_layer(bsz, seqlen, dmodel, dhid, checkpoint=False):
- model = 0
- grad = 0
-
- m, g = ffn(bsz, seqlen, dmodel, dhid, 'gelu')
- model += m
- grad += g*3
-
- m, g = attention_layer(bsz, seqlen, dmodel)
- model += m
- grad += g*5.0
-
- m, g = layer_norm(bsz, seqlen, dmodel)
- model += m
- grad += g*1.0
-
- if checkpoint:
- grad = bsz * seqlen * dmodel
-
- return model, grad
-
-def attention_layer(bsz, seqlen, dmodel):
- w_proj = dmodel*3*dmodel
- w_out = dmodel*dmodel
-
- x_residual = bsz*seqlen*dmodel
- x_proj = bsz*seqlen*dmodel*3
- #x_proj_contiguous = bsz*seqlen*dmodel*3
- x_proj_contiguous = 0
-
- x_qscaled = bsz*seqlen*dmodel
- x_qk = bsz*seqlen*seqlen*2 # we need to store both input sequence directions for gradient computation
- x_softmax = bsz*seqlen*seqlen
- x_softmax_v = bsz*seqlen*dmodel*2 # we need to store both input sequence directions for gradient computation
- #x_out_contiguous = bsz*seqlen*dmodel
- x_out_contiguous = 0
- x_out = bsz*seqlen*dmodel
-
- model = w_proj + w_out
- grad = x_residual + x_proj + x_proj_contiguous + x_qscaled + x_qk + x_softmax + x_softmax_v + x_out_contiguous + x_out
- return model, grad
-
-
-
-def ffn(bsz, seqlen, dmodel, dhid, func='relu'):
- # out = linear(relu(linear(x), inplace=True)) + x
- w1 = dmodel*dhid
- w2 = dhid*dmodel
- model = w1 + w2
- wgrad = model
- x1 = bsz*seqlen*dhid
- if func != 'relu': x1 *= 2 # inplace not possible with most other functions
- x2 = bsz*seqlen*dmodel
- residual = bsz*seqlen*dmodel
- grad = x1 + x2 + residual
-
- return model, grad
-
-
-OPTIMIZERS = ['adam', 'adafactor', 'adafactor-fac-only', '8-bit-adam', '16-bit-adam']
-
-
-def parse_args(args=None):
- parser = argparse.ArgumentParser('Memory calculator')
-
- parser.add_argument('--nlayers', type=int, help='The number of transformer layers.')
-    parser.add_argument('--bsz', type=int, default=1, help='The batch size. Default: 1')
- parser.add_argument('--seqlen', type=int, help='The sequence length.')
- parser.add_argument('--dmodel', type=int, help='The core model size.')
- parser.add_argument('--dhid', type=int, default=None,
- help='The hidden size of the FFN layer. Default: 4x model size.')
- parser.add_argument('--fp16-level', type=str, default='O1',
- help='FP16-level to use. O0 = FP32; O1 = mixed-precision (16+32); O3 = fp16. Default: O1.')
- parser.add_argument('--model', default='', choices=list(models.keys()), help='Predefined NLP transformer models')
- parser.add_argument('--optimizer', default='adam', choices=OPTIMIZERS, help='The optimizer to use.')
-    parser.add_argument('--vocab_size', type=int, default=None, help='The vocabulary size to use.')
- parser.add_argument('--offload', action='store_true', help='Whether to use optimizer offload.')
- parser.add_argument('--ngpus', type=int, default=1, help='The number of gpus. Default: 1')
-    parser.add_argument('--zero', type=int, default=0,
-                        help='The ZeRO level (1 optimizer, 2 optimizer+weights, 3 everything). Default: 0')
- parser.add_argument('--shared_groups', type=int, default=None, help='Number of shared layer groups (as in ALBERT). Defaults to no sharing.')
- parser.add_argument('--checkpoint', action='store_true', help='Use gradient checkpointing.')
-
- return parser.parse_args(args)
-
-
-def calculate_memory(args):
- if args.model != '':
- if args.model not in models:
- raise ValueError(f'{args.model} is not supported')
- else:
- for key, value in models[args.model].items():
- if getattr(args, key, None) is None:
- setattr(args, key, value)
-
- model, grad = transformer(args.bsz, args.seqlen, args.dmodel, args.nlayers, args.vocab_size, args.dhid, args.checkpoint, args.shared_groups)
- parameters = model
-
- if args.optimizer == 'adam':
- optim = 8*model
- elif args.optimizer == '8-bit-adam':
- optim = 2*model
- elif args.optimizer in ['16-bit-adam', 'adafactor']:
- optim = 4*model
- elif args.optimizer in ['adafactor-fac-only']:
- optim = math.log(model)
-
- if args.fp16_level == 'O0':
- # fp32 weights
- wgrad = 4*model
- model = 4*model
- grad = 4*grad # fp32
- elif args.fp16_level in ['O1', 'O2']:
- # fp16 weights + fp32 master weights
- wgrad = 2*model
- model = 4*model + (2*model)
- grad = 2*grad # fp16
- elif args.fp16_level == 'O3':
- wgrad = 2*model
- model = 2*model #fp16
-        grad = 2*grad # fp16
-
- model = get_GB(model)
- grad = get_GB(grad)
- optim = get_GB(optim)
- wgrad = get_GB(wgrad)
-
- cpu_mem = 0
- overhead = 0
-
- if args.zero == 1:
- if not args.offload:
- # assumes PCIe 4.0 infiniband (200 Gbit/s = 25 GB/s)
- overhead += optim/25
-
- optim = optim / args.ngpus
- elif args.zero == 2:
- if not args.offload:
- # assumes PCIe 4.0 infiniband (200 Gbit/s = 25 GB/s)
- overhead += optim/25
- overhead += wgrad/25
-
- optim = optim / args.ngpus
- wgrad = wgrad / args.ngpus
- elif args.zero == 3:
- if not args.offload:
- # assumes PCIe 4.0 infiniband (200 Gbit/s = 25 GB/s)
- overhead += optim/25
- overhead += model/25
- overhead += wgrad/25
-
- optim = optim / args.ngpus
- model = model / args.ngpus
- wgrad = wgrad / args.ngpus
-
-
- if args.offload:
- cpu_mem = optim + wgrad
- optim = 0
- wgrad = 0
- if args.ngpus <= 2:
- # 12 GB/s for PCIe 3.0 and 1-2x GPU setup (16 lanes, 16 GB/s theoretical)
- overhead = cpu_mem/12
- else:
- # 6 GB/s for PCIe 3.0 and 4x GPU setup
- overhead = cpu_mem/6
-
-
- total_mem = model + grad + optim + wgrad
- return locals()
-
-
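-def _example_usage():
-    # Programmatic usage sketch (helper added for illustration); the layer,
-    # width and vocabulary values below are illustrative and do not correspond
-    # to any of the predefined models.
-    example_args = parse_args(['--nlayers', '12', '--bsz', '8', '--seqlen', '512',
-                               '--dmodel', '768', '--vocab_size', '50257'])
-    mem = calculate_memory(example_args)
-    print('Total GPU memory: {0:.2f} GB for {1:.3f}B parameters'.format(
-        mem['total_mem'], mem['parameters'] / 1e9))
-
-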
-if __name__ == '__main__':
- args = parse_args()
- mem = calculate_memory(args)
- print('')
- print(f'Model: {args.model} with batch size {args.bsz} and sequence length {args.seqlen} and a total of {mem["parameters"]/1e9:.4f}B parameters.')
- print('='*80)
- print('Weight memory: {0:.2f} GB ({1:.2f}%)'.format(mem['model'], 100*mem['model']/mem['total_mem']))
- print('Weight gradient memory: {0:.2f} GB ({1:.2f}%)'.format(mem['wgrad'], 100*mem['wgrad']/mem['total_mem']))
- print('Input gradient memory: {0:.2f} GB ({1:.2f}%)'.format(mem['grad'], 100*mem['grad']/mem['total_mem']))
- print('Optimizer memory: {0:.2f} GB ({1:.2f}%)'.format(mem['optim'], 100*mem['optim']/mem['total_mem']))
- print('Total GPU memory: {0:.2f} GB'.format(mem['total_mem']))
- if mem['cpu_mem'] > 0:
- print('Total CPU memory: {0:.2f} GB'.format(mem['cpu_mem']))
- if mem['overhead'] > 0:
- print('Overhead: {0:.2f} seconds per update (can be partially overlapped with compute)'.format(mem['overhead']))
diff --git a/spaces/ucalyptus/DragGAN-unofficial/drag_gan.py b/spaces/ucalyptus/DragGAN-unofficial/drag_gan.py
deleted file mode 100644
index fca2d3a39b6ad0e4dcd4ff952988a7c61c4c599f..0000000000000000000000000000000000000000
--- a/spaces/ucalyptus/DragGAN-unofficial/drag_gan.py
+++ /dev/null
@@ -1,243 +0,0 @@
-import copy
-import os
-import random
-import urllib.request
-
-import numpy as np
-import torch
-import torch.nn.functional as FF
-import torch.optim
-from torchvision import utils
-from tqdm import tqdm
-
-from stylegan2.model import Generator
-
-
-class DownloadProgressBar(tqdm):
- def update_to(self, b=1, bsize=1, tsize=None):
- if tsize is not None:
- self.total = tsize
- self.update(b * bsize - self.n)
-
-
-def get_path(base_path):
- BASE_DIR = os.path.join('checkpoints')
-
- save_path = os.path.join(BASE_DIR, base_path)
- if not os.path.exists(save_path):
- url = f"https://huggingface.co/aaronb/StyleGAN2/resolve/main/{base_path}"
- print(f'{base_path} not found')
- print('Try to download from huggingface: ', url)
- os.makedirs(os.path.dirname(save_path), exist_ok=True)
- download_url(url, save_path)
- print('Downloaded to ', save_path)
- return save_path
-
-
-def download_url(url, output_path):
- with DownloadProgressBar(unit='B', unit_scale=True,
- miniters=1, desc=url.split('/')[-1]) as t:
- urllib.request.urlretrieve(url, filename=output_path, reporthook=t.update_to)
-
-
-class CustomGenerator(Generator):
- def prepare(
- self,
- styles,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f"noise_{i}") for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- return latent, noise
-
- def generate(
- self,
- latent,
- noise,
- ):
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
- if out.shape[-1] == 256: F = out
- i += 2
-
- image = skip
- F = FF.interpolate(F, image.shape[-2:], mode='bilinear')
- return image, F
-
-
-def stylegan2(
- size=1024,
- channel_multiplier=2,
- latent=512,
- n_mlp=8,
- ckpt='stylegan2-ffhq-config-f.pt'
-):
- g_ema = CustomGenerator(size, latent, n_mlp, channel_multiplier=channel_multiplier)
- checkpoint = torch.load(get_path(ckpt))
- g_ema.load_state_dict(checkpoint["g_ema"], strict=False)
- g_ema.requires_grad_(False)
- g_ema.eval()
- return g_ema
-
-
-def bilinear_interpolate_torch(im, y, x):
- """
- im : B,C,H,W
- y : 1,numPoints -- pixel location y float
-    x : 1,numPoints -- pixel location x float
- """
-
- x0 = torch.floor(x).long()
- x1 = x0 + 1
-
- y0 = torch.floor(y).long()
- y1 = y0 + 1
-
- wa = (x1.float() - x) * (y1.float() - y)
- wb = (x1.float() - x) * (y - y0.float())
- wc = (x - x0.float()) * (y1.float() - y)
- wd = (x - x0.float()) * (y - y0.float())
- # Instead of clamp
- x1 = x1 - torch.floor(x1 / im.shape[3]).int()
- y1 = y1 - torch.floor(y1 / im.shape[2]).int()
- Ia = im[:, :, y0, x0]
- Ib = im[:, :, y1, x0]
- Ic = im[:, :, y0, x1]
- Id = im[:, :, y1, x1]
-
- return Ia * wa + Ib * wb + Ic * wc + Id * wd
-
-
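-def _bilinear_example():
-    # Small sanity-check sketch for bilinear_interpolate_torch (helper and
-    # values are illustrative): sampling at a fractional location blends the
-    # four neighbouring pixels of the feature map.
-    im = torch.arange(16.0).view(1, 1, 4, 4)   # B,C,H,W
-    y = torch.tensor([1.5])
-    x = torch.tensor([2.5])
-    return bilinear_interpolate_torch(im, y, x)  # weighted mix of im[..., 1:3, 2:4]
-
-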
-def drag_gan(g_ema, latent: torch.Tensor, noise, F, handle_points, target_points, mask, max_iters=1000):
- handle_points0 = copy.deepcopy(handle_points)
- n = len(handle_points)
- r1, r2, lam, d = 3, 12, 20, 1
-
- def neighbor(x, y, d):
- points = []
- for i in range(x - d, x + d):
- for j in range(y - d, y + d):
- points.append(torch.tensor([i, j]).float().cuda())
- return points
-
- F0 = F.detach().clone()
- # latent = latent.detach().clone().requires_grad_(True)
- latent_trainable = latent[:, :6, :].detach().clone().requires_grad_(True)
- latent_untrainable = latent[:, 6:, :].detach().clone().requires_grad_(False)
- optimizer = torch.optim.Adam([latent_trainable], lr=2e-3)
- for iter in range(max_iters):
- for s in range(1):
- optimizer.zero_grad()
- latent = torch.cat([latent_trainable, latent_untrainable], dim=1)
- sample2, F2 = g_ema.generate(latent, noise)
-
- # motion supervision
- loss = 0
- for i in range(n):
- pi, ti = handle_points[i], target_points[i]
- di = (ti - pi) / torch.sum((ti - pi)**2)
-
- for qi in neighbor(int(pi[0]), int(pi[1]), r1):
- # f1 = F[..., int(qi[0]), int(qi[1])]
- # f2 = F2[..., int(qi[0] + di[0]), int(qi[1] + di[1])]
- f1 = bilinear_interpolate_torch(F2, qi[0], qi[1]).detach()
- f2 = bilinear_interpolate_torch(F2, qi[0] + di[0], qi[1] + di[1])
- loss += FF.l1_loss(f2, f1)
-
- # loss += ((F-F0) * (1-mask)).abs().mean() * lam
-
- loss.backward()
- optimizer.step()
-
- print(latent_trainable[0, 0, :10])
- # if s % 10 ==0:
- # utils.save_image(sample2, "test2.png", normalize=True, range=(-1, 1))
-
- # point tracking
- with torch.no_grad():
- sample2, F2 = g_ema.generate(latent, noise)
- for i in range(n):
- pi = handle_points0[i]
- # f = F0[..., int(pi[0]), int(pi[1])]
- f0 = bilinear_interpolate_torch(F0, pi[0], pi[1])
- minv = 1e9
- minx = 1e9
- miny = 1e9
- for qi in neighbor(int(handle_points[i][0]), int(handle_points[i][1]), r2):
- # f2 = F2[..., int(qi[0]), int(qi[1])]
- try:
- f2 = bilinear_interpolate_torch(F2, qi[0], qi[1])
- except:
- import ipdb
- ipdb.set_trace()
- v = torch.norm(f2 - f0, p=1)
- if v < minv:
- minv = v
- minx = int(qi[0])
- miny = int(qi[1])
- handle_points[i][0] = minx
- handle_points[i][1] = miny
-
- F = F2.detach().clone()
- if iter % 1 == 0:
- print(iter, loss.item(), handle_points, target_points)
- # p = handle_points[0].int()
- # sample2[0, :, p[0] - 5:p[0] + 5, p[1] - 5:p[1] + 5] = sample2[0, :, p[0] - 5:p[0] + 5, p[1] - 5:p[1] + 5] * 0
- # t = target_points[0].int()
- # sample2[0, :, t[0] - 5:t[0] + 5, t[1] - 5:t[1] + 5] = sample2[0, :, t[0] - 5:t[0] + 5, t[1] - 5:t[1] + 5] * 255
-
- # sample2[0, :, 210, 134] = sample2[0, :, 210, 134] * 0
- utils.save_image(sample2, "test2.png", normalize=True, range=(-1, 1))
-
- yield sample2, latent, F2
diff --git a/spaces/ucalyptus/PTI/models/StyleCLIP/optimization/run_optimization.py b/spaces/ucalyptus/PTI/models/StyleCLIP/optimization/run_optimization.py
deleted file mode 100644
index 766d0c81400951202bed51e3f1812e1260ccf071..0000000000000000000000000000000000000000
--- a/spaces/ucalyptus/PTI/models/StyleCLIP/optimization/run_optimization.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import argparse
-import math
-import os
-import pickle
-
-import torch
-import torchvision
-from torch import optim
-from tqdm import tqdm
-
-from StyleCLIP.criteria.clip_loss import CLIPLoss
-from StyleCLIP.models.stylegan2.model import Generator
-import clip
-from StyleCLIP.utils import ensure_checkpoint_exists
-
-
-def get_lr(t, initial_lr, rampdown=0.25, rampup=0.05):
- lr_ramp = min(1, (1 - t) / rampdown)
- lr_ramp = 0.5 - 0.5 * math.cos(lr_ramp * math.pi)
- lr_ramp = lr_ramp * min(1, t / rampup)
-
- return initial_lr * lr_ramp
-
-
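-def _lr_schedule_example():
-    # Illustrative sketch of the schedule produced by get_lr: the rate ramps up
-    # over the first ~5% of steps and decays with a half-cosine over the final
-    # 25%. The step count and base rate below are arbitrary example values.
-    steps = 300
-    return [get_lr(i / steps, initial_lr=0.1) for i in range(steps)]
-
-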
-def main(args, use_old_G):
- ensure_checkpoint_exists(args.ckpt)
- text_inputs = torch.cat([clip.tokenize(args.description)]).cuda()
- os.makedirs(args.results_dir, exist_ok=True)
- new_generator_path = f'/disk2/danielroich/Sandbox/stylegan2_ada_pytorch/checkpoints/model_{args.run_id}_{args.image_name}.pt'
- old_generator_path = '/disk2/danielroich/Sandbox/pretrained_models/ffhq.pkl'
-
- if not use_old_G:
- with open(new_generator_path, 'rb') as f:
- G = torch.load(f).cuda().eval()
- else:
- with open(old_generator_path, 'rb') as f:
- G = pickle.load(f)['G_ema'].cuda().eval()
-
- if args.latent_path:
- latent_code_init = torch.load(args.latent_path).cuda()
- elif args.mode == "edit":
- latent_code_init_not_trunc = torch.randn(1, 512).cuda()
- with torch.no_grad():
- latent_code_init = G.mapping(latent_code_init_not_trunc, None)
-
- latent = latent_code_init.detach().clone()
- latent.requires_grad = True
-
- clip_loss = CLIPLoss(args)
-
- optimizer = optim.Adam([latent], lr=args.lr)
-
- pbar = tqdm(range(args.step))
-
- for i in pbar:
- t = i / args.step
- lr = get_lr(t, args.lr)
- optimizer.param_groups[0]["lr"] = lr
-
- img_gen = G.synthesis(latent, noise_mode='const')
-
- c_loss = clip_loss(img_gen, text_inputs)
-
- if args.mode == "edit":
- l2_loss = ((latent_code_init - latent) ** 2).sum()
- loss = c_loss + args.l2_lambda * l2_loss
- else:
- loss = c_loss
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- pbar.set_description(
- (
- f"loss: {loss.item():.4f};"
- )
- )
- if args.save_intermediate_image_every > 0 and i % args.save_intermediate_image_every == 0:
- with torch.no_grad():
- img_gen = G.synthesis(latent, noise_mode='const')
-
- torchvision.utils.save_image(img_gen,
- f"/disk2/danielroich/Sandbox/StyleCLIP/results/inference_results/{str(i).zfill(5)}.png",
- normalize=True, range=(-1, 1))
-
- if args.mode == "edit":
- with torch.no_grad():
- img_orig = G.synthesis(latent_code_init, noise_mode='const')
-
- final_result = torch.cat([img_orig, img_gen])
- else:
- final_result = img_gen
-
- return final_result
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--description", type=str, default="a person with purple hair",
- help="the text that guides the editing/generation")
- parser.add_argument("--ckpt", type=str, default="../pretrained_models/stylegan2-ffhq-config-f.pt",
- help="pretrained StyleGAN2 weights")
- parser.add_argument("--stylegan_size", type=int, default=1024, help="StyleGAN resolution")
- parser.add_argument("--lr_rampup", type=float, default=0.05)
- parser.add_argument("--lr", type=float, default=0.1)
- parser.add_argument("--step", type=int, default=300, help="number of optimization steps")
- parser.add_argument("--mode", type=str, default="edit", choices=["edit", "free_generation"],
- help="choose between edit an image an generate a free one")
- parser.add_argument("--l2_lambda", type=float, default=0.008,
- help="weight of the latent distance (used for editing only)")
- parser.add_argument("--latent_path", type=str, default=None,
- help="starts the optimization from the given latent code if provided. Otherwose, starts from"
- "the mean latent in a free generation, and from a random one in editing. "
- "Expects a .pt format")
- parser.add_argument("--truncation", type=float, default=0.7,
- help="used only for the initial latent vector, and only when a latent code path is"
- "not provided")
- parser.add_argument("--save_intermediate_image_every", type=int, default=20,
- help="if > 0 then saves intermidate results during the optimization")
- parser.add_argument("--results_dir", type=str, default="results")
-
- args = parser.parse_args()
-
-    result_image = main(args, use_old_G=True)  # main() also expects use_old_G; True loads the pretrained FFHQ generator
-
- torchvision.utils.save_image(result_image.detach().cpu(), os.path.join(args.results_dir, "final_result.jpg"),
- normalize=True, scale_each=True, range=(-1, 1))
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/All Alone 1 Full Movie In Hindi 720p Download.md b/spaces/usbethFlerru/sovits-modelsV2/example/All Alone 1 Full Movie In Hindi 720p Download.md
deleted file mode 100644
index c63ab31dc866fc065aa91b0c58808b559a46fb25..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/All Alone 1 Full Movie In Hindi 720p Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-All Alone 1 Full Movie In Hindi 720p Download Download Zip ⚹⚹⚹ https://urlcod.com/2uyXcs
-
-Download Home Alone Movie Dual Audio (Hindi-English) in 720p quality in 1.1GB. ... Org is The Best Website/Platform For Bollywood And Hollywood HD Movies. ... Download Home Alone 1 Hindi 720p~ moviesverse.org ... Being home alone was fun for Kevin, having a pizza all to himself, jumping on his ... 1fdad05405
-
-
-
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Blade Runner 1982 Final Cut 1080p Bluray X264 Anoxmous Legenda HOT.md b/spaces/usbethFlerru/sovits-modelsV2/example/Blade Runner 1982 Final Cut 1080p Bluray X264 Anoxmous Legenda HOT.md
deleted file mode 100644
index bd8bafd2d4a92b87a6cda63c27c3368e8906f85d..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Blade Runner 1982 Final Cut 1080p Bluray X264 Anoxmous Legenda HOT.md
+++ /dev/null
@@ -1,8 +0,0 @@
-blade runner 1982 final cut 1080p bluray x264 anoxmous legenda Download File ⚙ https://urlcod.com/2uyVON
-
-WS, is the most perfect choice for you to enjoy the best movie. The expression 'Blade Runner' was released in 1982 and it belongs to Drama category. Good yet would you like to watch new movie without spending too much? You can download full Runner.The.Final.Cut.1982.TrueHD.AC3.MULTISUBS.1080p.BluRay.x264.WS without cost by clicking the download link above, you can also watch many other movies with lots of categories at The file hosting sof.Watch Blade Runner 1982 Subtitle from a database of thousands of subtitles in more. Runner.The.Final.Cut.1982.TrueHD.AC3.MULTISUBS.1080p.BluRay.x264.WS, Download it right now!
-
-Blade Runner: The Final Cut. All on One Disc, in High Definition, Four Candles.. This is the third edition of the Blade Runner Trilogy on a Blu-ray. The Film is presented in the high.Blade Runner: The Final Cut. Best Movie ever. watch in 480p 1080p. subtitles. Watch online free and Download.Blade Runner 1982 Subtitle from a database of thousands of subtitles in more. Runner.The.Final.Cut.1982.TrueHD.AC3.MULTISUBS.1080p.BluRay.x264.WS, Blade Runner.Buy Blade Runner 1982 Subtitle from a database of thousands of subtitles in more. Runner.The.Final.Cut.1982.TrueHD.AC3.MULTISUBS.1080p.BluRay.x264.WS. Blu-ray.. Watch Blade Runner 1982 Subtitle from a database of thousands of subtitles in more. Runner.The.Final.Cut.1982.TrueHD.AC3.MULTISUBS.1080p.BluRay.x264.WS. Blade.Blade Runner 1982 Subtitle from a database of thousands of subtitles in more. Runner.The.Final.Cut.1982.TrueHD.AC3.MULTISUBS.1080p.BluRay.x264.WS..Download Blade Runner 1982 Subtitle from a database of thousands of subtitles in more. Runner.The.Final.Cut.1982.TrueHD.AC3.MULTISUBS.1080p.BluRay.x264.WS, is the most perfect choice for you to enjoy the best movie. The expression 'Blade Runner' was released in 1982 and it 4fefd39f24
-
-
-
diff --git a/spaces/valhalla/glide-text2im/glide_text2im/respace.py b/spaces/valhalla/glide-text2im/glide_text2im/respace.py
deleted file mode 100644
index fa0e3972184f83a3bea359f25f53a9e69d691d3a..0000000000000000000000000000000000000000
--- a/spaces/valhalla/glide-text2im/glide_text2im/respace.py
+++ /dev/null
@@ -1,117 +0,0 @@
-"""
-Utilities for changing sampling schedules of a trained model.
-
-Simplified from: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion/respace.py
-"""
-
-import numpy as np
-import torch as th
-
-from .gaussian_diffusion import GaussianDiffusion
-
-
-def space_timesteps(num_timesteps, section_counts):
- """
- Create a list of timesteps to use from an original diffusion process,
- given the number of timesteps we want to take from equally-sized portions
- of the original process.
-
-    For example, if there are 300 timesteps and the section counts are [10,15,20],
-    then the first 100 timesteps are strided to be 10 timesteps, the second 100
- are strided to be 15 timesteps, and the final 100 are strided to be 20.
-
- :param num_timesteps: the number of diffusion steps in the original
- process to divide up.
- :param section_counts: either a list of numbers, or a string containing
- comma-separated numbers, indicating the step count
- per section. As a special case, use "ddimN" where N
- is a number of steps to use the striding from the
- DDIM paper.
- :return: a set of diffusion steps from the original process to use.
- """
- if isinstance(section_counts, str):
- if section_counts.startswith("ddim"):
- desired_count = int(section_counts[len("ddim") :])
- for i in range(1, num_timesteps):
- if len(range(0, num_timesteps, i)) == desired_count:
- return set(range(0, num_timesteps, i))
- raise ValueError(f"cannot create exactly {num_timesteps} steps with an integer stride")
- elif section_counts == "fast27":
- steps = space_timesteps(num_timesteps, "10,10,3,2,2")
- # Help reduce DDIM artifacts from noisiest timesteps.
- steps.remove(num_timesteps - 1)
- steps.add(num_timesteps - 3)
- return steps
- section_counts = [int(x) for x in section_counts.split(",")]
- size_per = num_timesteps // len(section_counts)
- extra = num_timesteps % len(section_counts)
- start_idx = 0
- all_steps = []
- for i, section_count in enumerate(section_counts):
- size = size_per + (1 if i < extra else 0)
- if size < section_count:
- raise ValueError(f"cannot divide section of {size} steps into {section_count}")
- if section_count <= 1:
- frac_stride = 1
- else:
- frac_stride = (size - 1) / (section_count - 1)
- cur_idx = 0.0
- taken_steps = []
- for _ in range(section_count):
- taken_steps.append(start_idx + round(cur_idx))
- cur_idx += frac_stride
- all_steps += taken_steps
- start_idx += size
- return set(all_steps)
-
-
-class SpacedDiffusion(GaussianDiffusion):
- """
- A diffusion process which can skip steps in a base diffusion process.
-
- :param use_timesteps: a collection (sequence or set) of timesteps from the
- original diffusion process to retain.
- :param kwargs: the kwargs to create the base diffusion process.
- """
-
- def __init__(self, use_timesteps, **kwargs):
- self.use_timesteps = set(use_timesteps)
- self.timestep_map = []
- self.original_num_steps = len(kwargs["betas"])
-
- base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa
- last_alpha_cumprod = 1.0
- new_betas = []
- for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod):
- if i in self.use_timesteps:
- new_betas.append(1 - alpha_cumprod / last_alpha_cumprod)
- last_alpha_cumprod = alpha_cumprod
- self.timestep_map.append(i)
- kwargs["betas"] = np.array(new_betas)
- super().__init__(**kwargs)
-
- def p_mean_variance(self, model, *args, **kwargs):
- return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
-
- def condition_mean(self, cond_fn, *args, **kwargs):
- return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs)
-
- def condition_score(self, cond_fn, *args, **kwargs):
- return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs)
-
- def _wrap_model(self, model):
- if isinstance(model, _WrappedModel):
- return model
- return _WrappedModel(model, self.timestep_map, self.original_num_steps)
-
-
-class _WrappedModel:
- def __init__(self, model, timestep_map, original_num_steps):
- self.model = model
- self.timestep_map = timestep_map
- self.original_num_steps = original_num_steps
-
- def __call__(self, x, ts, **kwargs):
- map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype)
- new_ts = map_tensor[ts]
- return self.model(x, new_ts, **kwargs)
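The `space_timesteps` docstring above describes how each section of the original schedule is strided, and `SpacedDiffusion.__init__` shows how the retained steps get new betas. A minimal sketch of both, assuming the `glide_text2im` package is installed and using an illustrative linear beta schedule (not GLIDE's actual one):

```python
import numpy as np
from glide_text2im.respace import space_timesteps  # assumes the glide-text2im package is installed

# The docstring's own example: 300 original steps, section counts [10, 15, 20].
kept = sorted(space_timesteps(300, [10, 15, 20]))
print(len(kept), kept[:5], kept[-1])   # 45 retained steps, e.g. [0, 11, 22, 33, 44] ... 299

# How SpacedDiffusion rebuilds betas for the retained steps: keep their cumulative
# alphas and solve  new_beta_i = 1 - alpha_bar_i / alpha_bar_(previous kept step).
betas = np.linspace(1e-4, 0.02, 300)   # illustrative linear schedule
alphas_cumprod = np.cumprod(1.0 - betas)
kept_set, last, new_betas = set(kept), 1.0, []
for i, ac in enumerate(alphas_cumprod):
    if i in kept_set:
        new_betas.append(1.0 - ac / last)
        last = ac
print(len(new_betas))                  # 45 respaced betas
```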
diff --git a/spaces/versus666/uplift_lab/README.md b/spaces/versus666/uplift_lab/README.md
deleted file mode 100644
index 91822d4b87fc4802f8b104009e244ef0f921c646..0000000000000000000000000000000000000000
--- a/spaces/versus666/uplift_lab/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Uplift Lab
-emoji: 🚀 🚀 🚀
-colorFrom: red
-colorTo: green
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: src/app.py
-pinned: false
----
\ No newline at end of file
diff --git a/spaces/vinceL/YonKomaMangaGenerator/sample_storyboards/stevejobs.md b/spaces/vinceL/YonKomaMangaGenerator/sample_storyboards/stevejobs.md
deleted file mode 100644
index db8323efd5fe3a16e7b2f61f5ce821a6af802216..0000000000000000000000000000000000000000
--- a/spaces/vinceL/YonKomaMangaGenerator/sample_storyboards/stevejobs.md
+++ /dev/null
@@ -1,58 +0,0 @@
-{
-"slide_deck": {
- "title": "Dive Into the Future: AI-Powered Scuba Diving Experience",
- "step_by_step_thinking_for_designing_your_storyboard": "Each slide is crafted to evoke a key emotional response. I begin with a promise of an unprecedented adventure, followed by a contextual picture of the current scuba diving experiences. I reveal the 'villain', traditional scuba diving methods, and elevate the stakes by highlighting potential safety and convenience threats with these methods. I unveil our solution, the revolutionary AI-powered scuba diving gear, branching its services into three key aspects for an easier understanding. I repeatedly raise the stakes, stating additional features and comparing them to conventional gear. I end by reinforcing our commitment to groundbreaking innovations.",
- "step_by_step_thinking_for_effectively_applying_the_framework": "Employing Jobs' framework, I start with a compelling promise, pique interest by presenting the gaps in current diving experiences, and position our product as the much-needed hero. I constantly raise the stakes, combining the thrill of exploration with safety and convenience of AI technology. I emphasize on continuous innovation and revolution, echoing Jobs' sentiment about Apple. Above all, I always circle back to our main promise, steering the narrative towards the future of scuba diving.",
- "slides": [
- {
- "id": 1,
- "type": "Make your Promise",
- "slide_design_description": "A serene underwater scene with vibrant marine life. A silhouette of a scuba diver in the background, overlaid with a futuristic blue interface displaying AI analytics. The screen is lightly overlaid with a watermark of an AI circuit pattern.",
- "image_generation_prompt": "Scuba diver silhouette, underwater setting with vibrant marine life, overlaid futuristic AI analytics on the screen, art form: Manga, additional: Artstation, 4K",
- "spoken_text": "We are about to take you into an unprecedented deep-water journey, as we introduce a scuba diving innovation that will change the course of underwater exploration."
- },
- {
- "id": 2,
- "type": "Share need-to-know context",
- "slide_design_description": "The slide is split in half. The left side shows a frustrated scuba diver wrestling with complex gears while right side depicts the same diver, calm and relaxed, assisted by a smart AI diving gear.",
- "image_generation_prompt": "Frustrated scuba diver on left, calm diver with AI gear on right, setting: underwater, action: comparison of experiences, art form: Manga, additional: movie still",
- "spoken_text": "Scuba diving has been a game of skill, courage, experience and cumbersome gears. But imagine if we could replace 'cumbersome' and 'experience' with 'intelligence' and 'efficiency'!"
- },
- {
- "id": 3,
- "type": "Introduce conflict / Create a villain",
- "slide_design_description": "The slide showcases the chaos of traditional scuba diving with bent regulators, twisted hoses and foggy masks, symbolizing the disarray of outdated methods.",
- "image_generation_prompt": "Chaotic scuba gears including bent regulators, twisted hoses, foggy masks, setting: underwater, action: disorder in representation, art form: Realistic drawing, additional: Artstation quality",
- "spoken_text": "Traditional diving gears: bulky, hard to handle and restricts you from truly experiencing the wonders of the deep."
- },
- {
- "id": 4,
- "type": "Raise the stakes",
- "slide_design_description": "Large bold text 'REIMAGINING DIVE', with dizzying depth of the ocean below it and a dynamic wave-like transition from the chaos of previous slide towards a lighter, brighter horizon.",
- "image_generation_prompt": "Bold text 'REIMAGINING DIVE', deep ocean setting, transition from chaotic to illuminated scene, art form: Digital painting, additional: Artstation, 4K",
- "spoken_text": "We are here to conquer this chaos. We are here to turn the 'complexity' tide and revolutionize your dive."
- },
- {
- "id": 5,
- "type": "Show off the solution",
- "slide_design_description": "The slide is divided into three parts showing our product offering seamless communication, deep-sea navigation, and safety measures. The end shows all three merging into one AI-equipped dive gear.",
- "image_generation_prompt": "Three parts showcasing communication, navigation, and safety features, merging to create an AI-equipped dive gear, art form: Manga, additional: High-resolution",
- "spoken_text": "We present an intelligent dive-guide, a hands-free communication module, and automatic alarm and response system. These aren't three; this is one: Our AI-powered scuba gear."
- },
- {
- "id": 6,
- "type": "Raise the stakes again",
- "slide_design_description": "Strip of gear features lighting up as tangible benefits they offer for divers are explained. Finally, the price appears equalling the average cost of traditional gears.",
- "image_generation_prompt": "Gear features lighting up, action: explaining benefits, then showing the price equal to traditional gears, art form: Comic strip style, additional: high-contrast color",
- "spoken_text": "Easy installation, intuitive interface, depth and co-diver tracking, seamless communication, emergency services - all of these come at the same price as the current gear. Which one would you dive with?"
- },
- {
- "id": 7,
- "type": "Reinforce your main message",
- "slide_design_description": "The closing slide features the diver from the first slide floating towards the surface with the sunlight illuminating her path. Company's logo appears with the quote 'Diving into the future, oceans at a time.'",
- "image_generation_prompt": "Diver floating towards the surface, rays of sunlight streaming down, company's logo, text: 'Diving into the future, oceans at a time', art form: Anime style, additional: sunset glow",
- "spoken_text": "An exhilarating adventure has unfolded, changing the face of scuba diving. Like the fearless explorers of the oceans, we too strive to innovate and revolutionize, one dive at a time."
- }
- ]
-}
-}
\ No newline at end of file
diff --git a/spaces/xiaoxuezi/spleeter/spleeter/model/functions/blstm.py b/spaces/xiaoxuezi/spleeter/spleeter/model/functions/blstm.py
deleted file mode 100644
index 6dc63bc1931ff2688056bfcb279570c26a4d471e..0000000000000000000000000000000000000000
--- a/spaces/xiaoxuezi/spleeter/spleeter/model/functions/blstm.py
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/usr/bin/env python
-# coding: utf8
-
-"""
- This system (UHL1) uses a bi-directional LSTM network as described in :
-
- `S. Uhlich, M. Porcu, F. Giron, M. Enenkl, T. Kemp, N. Takahashi and
- Y. Mitsufuji.
-
- "Improving music source separation based on deep neural networks through
- data augmentation and network blending", Proc. ICASSP, 2017.`
-
- It has three BLSTM layers, each having 500 cells. For each instrument,
- a network is trained which predicts the target instrument amplitude from
- the mixture amplitude in the STFT domain (frame size: 4096, hop size:
- 1024). The raw output of each network is then combined by a multichannel
- Wiener filter. The network is trained on musdb where we split train into
- train_train and train_valid with 86 and 14 songs, respectively. The
- validation set is used to perform early stopping and hyperparameter
- selection (LSTM layer dropout rate, regularization strength).
-"""
-
-from typing import Dict, Optional
-
-# pyright: reportMissingImports=false
-# pylint: disable=import-error
-import tensorflow as tf
-from tensorflow.compat.v1.keras.initializers import he_uniform
-from tensorflow.compat.v1.keras.layers import CuDNNLSTM
-from tensorflow.keras.layers import (
- Bidirectional,
- Dense,
- Flatten,
- Reshape,
- TimeDistributed,
-)
-
-from . import apply
-
-# pylint: enable=import-error
-
-__email__ = "spleeter@deezer.com"
-__author__ = "Deezer Research"
-__license__ = "MIT License"
-
-
-def apply_blstm(
- input_tensor: tf.Tensor, output_name: str = "output", params: Optional[Dict] = None
-) -> tf.Tensor:
- """
- Apply BLSTM to the given input_tensor.
-
- Parameters:
- input_tensor (tensorflow.Tensor):
- Input of the model.
- output_name (str):
- (Optional) name of the output, default to 'output'.
- params (Optional[Dict]):
- (Optional) dict of BLSTM parameters.
-
- Returns:
- tensorflow.Tensor:
- Output tensor.
- """
- if params is None:
- params = {}
- units: int = params.get("lstm_units", 250)
- kernel_initializer = he_uniform(seed=50)
-    flatten_input = TimeDistributed(Flatten())(input_tensor)
-
- def create_bidirectional():
- return Bidirectional(
- CuDNNLSTM(
- units, kernel_initializer=kernel_initializer, return_sequences=True
- )
- )
-
-    l1 = create_bidirectional()(flatten_input)
-    l2 = create_bidirectional()(l1)
-    l3 = create_bidirectional()(l2)
-    dense = TimeDistributed(
-        Dense(
-            int(flatten_input.shape[2]),
-            activation="relu",
-            kernel_initializer=kernel_initializer,
-        )
-    )(l3)
- output: tf.Tensor = TimeDistributed(
- Reshape(input_tensor.shape[2:]), name=output_name
- )(dense)
- return output
-
-
-def blstm(
- input_tensor: tf.Tensor, output_name: str = "output", params: Optional[Dict] = None
-) -> tf.Tensor:
- """ Model function applier. """
- return apply(apply_blstm, input_tensor, output_name, params)
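The module docstring above describes the UHL1 stack: three BLSTM layers of 500 cells (2 x 250 units) applied frame by frame to the flattened spectrogram, followed by a dense projection back to the input feature size and a reshape. A hedged sketch of the same shape flow using stock `tf.keras.layers.LSTM` instead of `CuDNNLSTM` (so it also runs on CPU), with toy dimensions rather than Spleeter's real 4096/1024 STFT setup:

```python
# Hedged sketch of the same layer stack with stock Keras layers; shapes are toy values.
import tensorflow as tf
from tensorflow.keras.layers import Bidirectional, Dense, Flatten, LSTM, Reshape, TimeDistributed

batch, frames, freq_bins, channels = 2, 64, 128, 2           # assumed toy dimensions
x = tf.random.normal((batch, frames, freq_bins, channels))   # (B, T, F, C) spectrogram-like input

flat = TimeDistributed(Flatten())(x)                          # (B, T, F*C)
h = flat
for _ in range(3):                                            # three BLSTM layers, 2 x 250 = 500 cells each
    h = Bidirectional(LSTM(250, return_sequences=True))(h)    # (B, T, 500)
dense = TimeDistributed(Dense(freq_bins * channels, activation="relu"))(h)
out = TimeDistributed(Reshape((freq_bins, channels)))(dense)  # back to (B, T, F, C)
print(out.shape)                                              # (2, 64, 128, 2)
```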
diff --git a/spaces/xnetba/Chat_advance/modules/models/__init__.py b/spaces/xnetba/Chat_advance/modules/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/xp3857/text-to-image/style.css b/spaces/xp3857/text-to-image/style.css
deleted file mode 100644
index bc65aa93b12419e2df38b0c9d7191184aaf851d4..0000000000000000000000000000000000000000
--- a/spaces/xp3857/text-to-image/style.css
+++ /dev/null
@@ -1,113 +0,0 @@
-.app.svelte-p7tiy3.svelte-p7tiy3{
- background:None;
-}
-.unpadded_box.large.svelte-1vhybi6{
- background:None6fbcffa8;
- min-height:100%;
-}
-span.svelte-1l2rj76{
-  color: white !important;
-}
-div.svelte-1fwqiwq .block{
- background:None4d8df1;
-}
-.lg.svelte-1h4gtph{
- background:None4d8df1;
- color:white;
- height:100px;
-}
-#restart{
- position: relative;
- font-family: "Poppins",sans-serif;
- text-align: center;
- border-radius: 8px;
- background: #0063f787;
- border-style: solid;
- border-width: 1px;
- border-color: #ffffff;
- width: 100%;
- height: 50%;
- max-height: 200px;
- padding: 0px 10px;
- transform: translate(-50%,0%);
- left: 50%;
-}
-#head{
- color:white;
- margin-top:15px;
- margin-bottom:5px;
-}
-#cont{
- color: white;
- margin-top: 5px;
- margin-bottom: 15px;
- font-size: 1.1rem;
-}
-
-.lds-ellipsis {
- display: inline-block;
- position: relative;
- width: 80px;
- height: 80px;
-
-}
-.lds-ellipsis div {
- position: absolute;
- z-index:199999;
-
- top: 33px;
- width: 13px;
- height: 13px;
- border-radius: 50%;
- background: blue;
- animation-timing-function: cubic-bezier(0, 1, 1, 0);
-}
-.lds-ellipsis div:nth-child(1) {
- left: 8px;
- animation: lds-ellipsis1 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(2) {
- left: 8px;
- animation: lds-ellipsis2 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(3) {
- left: 32px;
- animation: lds-ellipsis2 0.6s infinite;
-}
-.lds-ellipsis div:nth-child(4) {
- left: 56px;
- animation: lds-ellipsis3 0.6s infinite;
-}
-@keyframes lds-ellipsis1 {
- 0% {
- transform: scale(0);
- }
- 100% {
- transform: scale(1);
- }
-}
-@keyframes lds-ellipsis3 {
-  0% {
-    transform: scale(1);
-  }
-  100% {
-    transform: scale(0);
-  }
-}
-@keyframes lds-ellipsis2 {
- 0% {
- transform: translate(0, 0);
- }
- 100% {
- transform: translate(24px, 0);
- }
-}
-
diff --git a/spaces/xuetao/bingo3/src/components/ui/codeblock.tsx b/spaces/xuetao/bingo3/src/components/ui/codeblock.tsx
deleted file mode 100644
index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000
--- a/spaces/xuetao/bingo3/src/components/ui/codeblock.tsx
+++ /dev/null
@@ -1,142 +0,0 @@
-'use client'
-
-import { FC, memo } from 'react'
-import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter'
-import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism'
-
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons'
-import { Button } from '@/components/ui/button'
-
-interface Props {
- language: string
- value: string
-}
-
-interface languageMap {
- [key: string]: string | undefined
-}
-
-export const programmingLanguages: languageMap = {
- javascript: '.js',
- python: '.py',
- java: '.java',
- c: '.c',
- cpp: '.cpp',
- 'c++': '.cpp',
- 'c#': '.cs',
- ruby: '.rb',
- php: '.php',
- swift: '.swift',
- 'objective-c': '.m',
- kotlin: '.kt',
- typescript: '.ts',
- go: '.go',
- perl: '.pl',
- rust: '.rs',
- scala: '.scala',
- haskell: '.hs',
- lua: '.lua',
- shell: '.sh',
- sql: '.sql',
- html: '.html',
- css: '.css'
-  // add more file extensions here; make sure the key matches the language prop passed to the CodeBlock component
-}
-
-export const generateRandomString = (length: number, lowercase = false) => {
- const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0
- let result = ''
- for (let i = 0; i < length; i++) {
- result += chars.charAt(Math.floor(Math.random() * chars.length))
- }
- return lowercase ? result.toLowerCase() : result
-}
-
-const CodeBlock: FC<Props> = memo(({ language, value }) => {
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
-
- const downloadAsFile = () => {
- if (typeof window === 'undefined') {
- return
- }
- const fileExtension = programmingLanguages[language] || '.file'
- const suggestedFileName = `file-${generateRandomString(
- 3,
- true
- )}${fileExtension}`
- const fileName = window.prompt('Enter file name' || '', suggestedFileName)
-
- if (!fileName) {
- // User pressed cancel on prompt.
- return
- }
-
- const blob = new Blob([value], { type: 'text/plain' })
- const url = URL.createObjectURL(blob)
- const link = document.createElement('a')
- link.download = fileName
- link.href = url
- link.style.display = 'none'
- document.body.appendChild(link)
- link.click()
- document.body.removeChild(link)
- URL.revokeObjectURL(url)
- }
-
- const onCopy = () => {
- if (isCopied) return
- copyToClipboard(value)
- }
-
- return (
-
-
-
{language}
-
-
-
- Download
-
-
- {isCopied ? : }
- Copy code
-
-
-
-
- {value}
-
-
- )
-})
-CodeBlock.displayName = 'CodeBlock'
-
-export { CodeBlock }
diff --git a/spaces/yangogo/bingo/src/components/user-menu.tsx b/spaces/yangogo/bingo/src/components/user-menu.tsx
deleted file mode 100644
index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000
--- a/spaces/yangogo/bingo/src/components/user-menu.tsx
+++ /dev/null
@@ -1,113 +0,0 @@
-'use client'
-
-import { useEffect, useState } from 'react'
-import Image from 'next/image'
-import { toast } from 'react-hot-toast'
-import { Button } from '@/components/ui/button'
-import pkg from '../../package.json'
-import {
- DropdownMenu,
- DropdownMenuContent,
- DropdownMenuItem,
- DropdownMenuSeparator,
- DropdownMenuTrigger
-} from '@/components/ui/dropdown-menu'
-import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons'
-import SettingIcon from '@/assets/images/settings.svg'
-import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard'
-
-export function UserMenu() {
- const [host, setHost] = useState('')
- const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 })
- useEffect(() => {
- setHost(location.host)
- }, [])
-
- useEffect(() => {
- if (isCopied) {
- toast.success('复制成功')
- }
- }, [isCopied])
- return (
-
-
-
-
-
-
-
- 设置
-
-
-
-
- location.href='#dialog="settings"'
- }
- className="cursor-pointer"
- >
- 设置用户
-
-
-
- location.href='#dialog="voice"'
- }
- className="cursor-pointer"
- >
- 语音设置
-
-
-
-
- 开源地址
-
-
-
-
-
-
-
- 托管地址
- 🤗
-
-
-
-
-
-
- 复制站点
-
-
-
-
-
- 版本信息 {pkg.version}
-
-
-
- 站点域名
- copyToClipboard(host)} className="flex gap-1 text-xs text-zinc-500 cursor-pointer">
- {host}
-
-
-
-
-
- )
-}
diff --git a/spaces/yangogo/bingo/tests/parse.ts b/spaces/yangogo/bingo/tests/parse.ts
deleted file mode 100644
index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000
--- a/spaces/yangogo/bingo/tests/parse.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import { promises as fs } from 'fs'
-import { join } from 'path'
-import { parseHeadersFromCurl } from '@/lib/utils'
-
-(async () => {
- const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8')
- const headers = parseHeadersFromCurl(content)
- console.log(headers)
-
- const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8')
- const cmdHeaders = parseHeadersFromCurl(cmdContent)
- console.log(cmdHeaders)
-})()
diff --git a/spaces/yassTrad/extractiveSum/README.md b/spaces/yassTrad/extractiveSum/README.md
deleted file mode 100644
index 6d35e68b84a935be0ee8b445b729077fa6f78e19..0000000000000000000000000000000000000000
--- a/spaces/yassTrad/extractiveSum/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ExtractiveSum
-emoji: 👀
-colorFrom: purple
-colorTo: green
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/ybelkada/interfacegan_pp/torch_utils/ops/conv2d_gradfix.py b/spaces/ybelkada/interfacegan_pp/torch_utils/ops/conv2d_gradfix.py
deleted file mode 100644
index 388778fa971d7bc5c64b5fd6c0e5492863ee1c5f..0000000000000000000000000000000000000000
--- a/spaces/ybelkada/interfacegan_pp/torch_utils/ops/conv2d_gradfix.py
+++ /dev/null
@@ -1,198 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Custom replacement for `torch.nn.functional.conv2d` that supports
-arbitrarily high order gradients with zero performance penalty."""
-
-import contextlib
-import torch
-
-# pylint: disable=redefined-builtin
-# pylint: disable=arguments-differ
-# pylint: disable=protected-access
-
-#----------------------------------------------------------------------------
-
-enabled = False # Enable the custom op by setting this to true.
-weight_gradients_disabled = False # Forcefully disable computation of gradients with respect to the weights.
-
-@contextlib.contextmanager
-def no_weight_gradients(disable=True):
- global weight_gradients_disabled
- old = weight_gradients_disabled
- if disable:
- weight_gradients_disabled = True
- yield
- weight_gradients_disabled = old
-
-#----------------------------------------------------------------------------
-
-def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
- if _should_use_custom_op(input):
- return _conv2d_gradfix(transpose=False, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=0, dilation=dilation, groups=groups).apply(input, weight, bias)
- return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups)
-
-def conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1):
- if _should_use_custom_op(input):
- return _conv2d_gradfix(transpose=True, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation).apply(input, weight, bias)
- return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation)
-
-#----------------------------------------------------------------------------
-
-def _should_use_custom_op(input):
- assert isinstance(input, torch.Tensor)
- if (not enabled) or (not torch.backends.cudnn.enabled):
- return False
- if input.device.type != 'cuda':
- return False
- return True
-
-def _tuple_of_ints(xs, ndim):
- xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim
- assert len(xs) == ndim
- assert all(isinstance(x, int) for x in xs)
- return xs
-
-#----------------------------------------------------------------------------
-
-_conv2d_gradfix_cache = dict()
-_null_tensor = torch.empty([0])
-
-def _conv2d_gradfix(transpose, weight_shape, stride, padding, output_padding, dilation, groups):
- # Parse arguments.
- ndim = 2
- weight_shape = tuple(weight_shape)
- stride = _tuple_of_ints(stride, ndim)
- padding = _tuple_of_ints(padding, ndim)
- output_padding = _tuple_of_ints(output_padding, ndim)
- dilation = _tuple_of_ints(dilation, ndim)
-
- # Lookup from cache.
- key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups)
- if key in _conv2d_gradfix_cache:
- return _conv2d_gradfix_cache[key]
-
- # Validate arguments.
- assert groups >= 1
- assert len(weight_shape) == ndim + 2
- assert all(stride[i] >= 1 for i in range(ndim))
- assert all(padding[i] >= 0 for i in range(ndim))
- assert all(dilation[i] >= 0 for i in range(ndim))
- if not transpose:
- assert all(output_padding[i] == 0 for i in range(ndim))
- else: # transpose
- assert all(0 <= output_padding[i] < max(stride[i], dilation[i]) for i in range(ndim))
-
- # Helpers.
- common_kwargs = dict(stride=stride, padding=padding, dilation=dilation, groups=groups)
- def calc_output_padding(input_shape, output_shape):
- if transpose:
- return [0, 0]
- return [
- input_shape[i + 2]
- - (output_shape[i + 2] - 1) * stride[i]
- - (1 - 2 * padding[i])
- - dilation[i] * (weight_shape[i + 2] - 1)
- for i in range(ndim)
- ]
-
- # Forward & backward.
- class Conv2d(torch.autograd.Function):
- @staticmethod
- def forward(ctx, input, weight, bias):
- assert weight.shape == weight_shape
- ctx.save_for_backward(
- input if weight.requires_grad else _null_tensor,
- weight if input.requires_grad else _null_tensor,
- )
- ctx.input_shape = input.shape
-
- # Simple 1x1 convolution => cuBLAS (only on Volta, not on Ampere).
- if weight_shape[2:] == stride == dilation == (1, 1) and padding == (0, 0) and torch.cuda.get_device_capability(input.device) < (8, 0):
- a = weight.reshape(groups, weight_shape[0] // groups, weight_shape[1])
- b = input.reshape(input.shape[0], groups, input.shape[1] // groups, -1)
- c = (a.transpose(1, 2) if transpose else a) @ b.permute(1, 2, 0, 3).flatten(2)
- c = c.reshape(-1, input.shape[0], *input.shape[2:]).transpose(0, 1)
- c = c if bias is None else c + bias.unsqueeze(0).unsqueeze(2).unsqueeze(3)
- return c.contiguous(memory_format=(torch.channels_last if input.stride(1) == 1 else torch.contiguous_format))
-
- # General case => cuDNN.
- if transpose:
- return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, output_padding=output_padding, **common_kwargs)
- return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, **common_kwargs)
-
- @staticmethod
- def backward(ctx, grad_output):
- input, weight = ctx.saved_tensors
- input_shape = ctx.input_shape
- grad_input = None
- grad_weight = None
- grad_bias = None
-
- if ctx.needs_input_grad[0]:
- p = calc_output_padding(input_shape=input_shape, output_shape=grad_output.shape)
- op = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs)
- grad_input = op.apply(grad_output, weight, None)
- assert grad_input.shape == input_shape
-
- if ctx.needs_input_grad[1] and not weight_gradients_disabled:
- grad_weight = Conv2dGradWeight.apply(grad_output, input)
- assert grad_weight.shape == weight_shape
-
- if ctx.needs_input_grad[2]:
- grad_bias = grad_output.sum([0, 2, 3])
-
- return grad_input, grad_weight, grad_bias
-
- # Gradient with respect to the weights.
- class Conv2dGradWeight(torch.autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input):
- ctx.save_for_backward(
- grad_output if input.requires_grad else _null_tensor,
- input if grad_output.requires_grad else _null_tensor,
- )
- ctx.grad_output_shape = grad_output.shape
- ctx.input_shape = input.shape
-
- # Simple 1x1 convolution => cuBLAS (on both Volta and Ampere).
- if weight_shape[2:] == stride == dilation == (1, 1) and padding == (0, 0):
- a = grad_output.reshape(grad_output.shape[0], groups, grad_output.shape[1] // groups, -1).permute(1, 2, 0, 3).flatten(2)
- b = input.reshape(input.shape[0], groups, input.shape[1] // groups, -1).permute(1, 2, 0, 3).flatten(2)
- c = (b @ a.transpose(1, 2) if transpose else a @ b.transpose(1, 2)).reshape(weight_shape)
- return c.contiguous(memory_format=(torch.channels_last if input.stride(1) == 1 else torch.contiguous_format))
-
- # General case => cuDNN.
- name = 'aten::cudnn_convolution_transpose_backward_weight' if transpose else 'aten::cudnn_convolution_backward_weight'
- flags = [torch.backends.cudnn.benchmark, torch.backends.cudnn.deterministic, torch.backends.cudnn.allow_tf32]
- return torch._C._jit_get_operation(name)(weight_shape, grad_output, input, padding, stride, dilation, groups, *flags)
-
- @staticmethod
- def backward(ctx, grad2_grad_weight):
- grad_output, input = ctx.saved_tensors
- grad_output_shape = ctx.grad_output_shape
- input_shape = ctx.input_shape
- grad2_grad_output = None
- grad2_input = None
-
- if ctx.needs_input_grad[0]:
- grad2_grad_output = Conv2d.apply(input, grad2_grad_weight, None)
- assert grad2_grad_output.shape == grad_output_shape
-
- if ctx.needs_input_grad[1]:
- p = calc_output_padding(input_shape=input_shape, output_shape=grad_output_shape)
- op = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs)
- grad2_input = op.apply(grad_output, grad2_grad_weight, None)
- assert grad2_input.shape == input_shape
-
- return grad2_grad_output, grad2_input
-
- _conv2d_gradfix_cache[key] = Conv2d
- return Conv2d
-
-#----------------------------------------------------------------------------
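The module above replaces `torch.nn.functional.conv2d` so that higher-order gradients (as needed by gradient penalties such as R1 regularization in StyleGAN2 training) can be taken through convolutions without a performance penalty. A hedged usage sketch, assuming the repository root is on `sys.path`; on a machine without CUDA it simply falls back to the stock op, which is exactly what `_should_use_custom_op` decides:

```python
# Hedged usage sketch: double-backward through the custom conv2d, the pattern a
# gradient penalty needs. Paths/shapes are illustrative only.
import torch
from torch_utils.ops import conv2d_gradfix  # assumes the repo root above is on sys.path

conv2d_gradfix.enabled = True               # opt in to the custom op (only used on CUDA)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.randn(4, 3, 32, 32, device=device, requires_grad=True)
w = torch.randn(8, 3, 3, 3, device=device, requires_grad=True)

y = conv2d_gradfix.conv2d(x, w, padding=1)
# First-order gradient of a scalar w.r.t. the input, kept in the graph...
(grad_x,) = torch.autograd.grad(y.sum(), x, create_graph=True)
# ...so a second backward pass (the "penalty" term) can reach the weights.
penalty = grad_x.square().sum()
penalty.backward()
print(w.grad.shape)                         # torch.Size([8, 3, 3, 3])
```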
diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/PianoRollToolbar/PianoRollToolSelector.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/PianoRollToolbar/PianoRollToolSelector.tsx
deleted file mode 100644
index 5ecad77626f9b93eed73f017d18c3d558af9d05c..0000000000000000000000000000000000000000
--- a/spaces/yderre-aubay/midi-player-demo/src/main/components/PianoRollToolbar/PianoRollToolSelector.tsx
+++ /dev/null
@@ -1,17 +0,0 @@
-import { observer } from "mobx-react-lite"
-import { useCallback } from "react"
-import { useStores } from "../../hooks/useStores"
-import { ToolSelector } from "../Toolbar/ToolSelector"
-
-export const PianoRollToolSelector = observer(() => {
- const { pianoRollStore } = useStores()
- return (
- (pianoRollStore.mouseMode = mouseMode),
- [],
- )}
- />
- )
-})
diff --git a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/utils/realesrgan_utils.py b/spaces/ygangang/CodeFormer/CodeFormer/basicsr/utils/realesrgan_utils.py
deleted file mode 100644
index ff94523b7ddd61f0b72280950fd36e1b8133bf4c..0000000000000000000000000000000000000000
--- a/spaces/ygangang/CodeFormer/CodeFormer/basicsr/utils/realesrgan_utils.py
+++ /dev/null
@@ -1,296 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import queue
-import threading
-import torch
-from basicsr.utils.download_util import load_file_from_url
-from torch.nn import functional as F
-
-# ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-
-class RealESRGANer():
- """A helper class for upsampling images with RealESRGAN.
-
- Args:
- scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4.
-        model_path (str): The path to the pretrained model. It can also be a URL, in which case the weights are downloaded automatically.
- model (nn.Module): The defined network. Default: None.
-        tile (int): Since very large images can exhaust GPU memory, this option first crops the input
-            image into tiles and processes each one separately; the results are then merged back into one image.
-            0 disables tiling. Default: 0.
- tile_pad (int): The pad size for each tile, to remove border artifacts. Default: 10.
- pre_pad (int): Pad the input images to avoid border artifacts. Default: 10.
-        half (bool): Whether to use half precision during inference. Default: False.
- """
-
- def __init__(self,
- scale,
- model_path,
- model=None,
- tile=0,
- tile_pad=10,
- pre_pad=10,
- half=False,
- device=None,
- gpu_id=None):
- self.scale = scale
- self.tile_size = tile
- self.tile_pad = tile_pad
- self.pre_pad = pre_pad
- self.mod_scale = None
- self.half = half
-
- # initialize model
- if gpu_id:
- self.device = torch.device(
- f'cuda:{gpu_id}' if torch.cuda.is_available() else 'cpu') if device is None else device
- else:
- self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') if device is None else device
-        # if the model_path starts with https, the weights are first downloaded to the folder: weights/realesrgan
- if model_path.startswith('https://'):
- model_path = load_file_from_url(
- url=model_path, model_dir=os.path.join('weights/realesrgan'), progress=True, file_name=None)
- loadnet = torch.load(model_path, map_location=torch.device('cpu'))
- # prefer to use params_ema
- if 'params_ema' in loadnet:
- keyname = 'params_ema'
- else:
- keyname = 'params'
- model.load_state_dict(loadnet[keyname], strict=True)
- model.eval()
- self.model = model.to(self.device)
- if self.half:
- self.model = self.model.half()
-
- def pre_process(self, img):
- """Pre-process, such as pre-pad and mod pad, so that the images can be divisible
- """
- img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
- self.img = img.unsqueeze(0).to(self.device)
- if self.half:
- self.img = self.img.half()
-
- # pre_pad
- if self.pre_pad != 0:
- self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
- # mod pad for divisible borders
- if self.scale == 2:
- self.mod_scale = 2
- elif self.scale == 1:
- self.mod_scale = 4
- if self.mod_scale is not None:
- self.mod_pad_h, self.mod_pad_w = 0, 0
- _, _, h, w = self.img.size()
- if (h % self.mod_scale != 0):
- self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
- if (w % self.mod_scale != 0):
- self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
- self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')
-
- def process(self):
- # model inference
- self.output = self.model(self.img)
-
- def tile_process(self):
- """It will first crop input images to tiles, and then process each tile.
-        Finally, all the processed tiles are merged into one image.
-
- Modified from: https://github.com/ata4/esrgan-launcher
- """
- batch, channel, height, width = self.img.shape
- output_height = height * self.scale
- output_width = width * self.scale
- output_shape = (batch, channel, output_height, output_width)
-
- # start with black image
- self.output = self.img.new_zeros(output_shape)
- tiles_x = math.ceil(width / self.tile_size)
- tiles_y = math.ceil(height / self.tile_size)
-
- # loop over all tiles
- for y in range(tiles_y):
- for x in range(tiles_x):
- # extract tile from input image
- ofs_x = x * self.tile_size
- ofs_y = y * self.tile_size
- # input tile area on total image
- input_start_x = ofs_x
- input_end_x = min(ofs_x + self.tile_size, width)
- input_start_y = ofs_y
- input_end_y = min(ofs_y + self.tile_size, height)
-
- # input tile area on total image with padding
- input_start_x_pad = max(input_start_x - self.tile_pad, 0)
- input_end_x_pad = min(input_end_x + self.tile_pad, width)
- input_start_y_pad = max(input_start_y - self.tile_pad, 0)
- input_end_y_pad = min(input_end_y + self.tile_pad, height)
-
- # input tile dimensions
- input_tile_width = input_end_x - input_start_x
- input_tile_height = input_end_y - input_start_y
- tile_idx = y * tiles_x + x + 1
- input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]
-
- # upscale tile
- try:
- with torch.no_grad():
- output_tile = self.model(input_tile)
- except RuntimeError as error:
- print('Error', error)
- # print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')
-
- # output tile area on total image
- output_start_x = input_start_x * self.scale
- output_end_x = input_end_x * self.scale
- output_start_y = input_start_y * self.scale
- output_end_y = input_end_y * self.scale
-
- # output tile area without padding
- output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
- output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
- output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
- output_end_y_tile = output_start_y_tile + input_tile_height * self.scale
-
- # put tile into output image
- self.output[:, :, output_start_y:output_end_y,
- output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
- output_start_x_tile:output_end_x_tile]
-
- def post_process(self):
- # remove extra pad
- if self.mod_scale is not None:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
- # remove prepad
- if self.pre_pad != 0:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
- return self.output
-
- @torch.no_grad()
- def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):
- h_input, w_input = img.shape[0:2]
- # img: numpy
- img = img.astype(np.float32)
- if np.max(img) > 256: # 16-bit image
- max_range = 65535
- print('\tInput is a 16-bit image')
- else:
- max_range = 255
- img = img / max_range
- if len(img.shape) == 2: # gray image
- img_mode = 'L'
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
- elif img.shape[2] == 4: # RGBA image with alpha channel
- img_mode = 'RGBA'
- alpha = img[:, :, 3]
- img = img[:, :, 0:3]
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- if alpha_upsampler == 'realesrgan':
- alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
- else:
- img_mode = 'RGB'
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
-
- # ------------------- process image (without the alpha channel) ------------------- #
- with torch.no_grad():
- self.pre_process(img)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_img_t = self.post_process()
- output_img = output_img_t.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
- if img_mode == 'L':
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)
- del output_img_t
- torch.cuda.empty_cache()
-
- # ------------------- process the alpha channel if necessary ------------------- #
- if img_mode == 'RGBA':
- if alpha_upsampler == 'realesrgan':
- self.pre_process(alpha)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_alpha = self.post_process()
- output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
- output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
- else: # use the cv2 resize for alpha channel
- h, w = alpha.shape[0:2]
- output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR)
-
- # merge the alpha channel
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
- output_img[:, :, 3] = output_alpha
-
- # ------------------------------ return ------------------------------ #
- if max_range == 65535: # 16-bit image
- output = (output_img * 65535.0).round().astype(np.uint16)
- else:
- output = (output_img * 255.0).round().astype(np.uint8)
-
- if outscale is not None and outscale != float(self.scale):
- output = cv2.resize(
- output, (
- int(w_input * outscale),
- int(h_input * outscale),
- ), interpolation=cv2.INTER_LANCZOS4)
-
- return output, img_mode
-
-
-class PrefetchReader(threading.Thread):
- """Prefetch images.
-
- Args:
- img_list (list[str]): A image list of image paths to be read.
- num_prefetch_queue (int): Number of prefetch queue.
- """
-
- def __init__(self, img_list, num_prefetch_queue):
- super().__init__()
- self.que = queue.Queue(num_prefetch_queue)
- self.img_list = img_list
-
- def run(self):
- for img_path in self.img_list:
- img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)
- self.que.put(img)
-
- self.que.put(None)
-
- def __next__(self):
- next_item = self.que.get()
- if next_item is None:
- raise StopIteration
- return next_item
-
- def __iter__(self):
- return self
-
-
-class IOConsumer(threading.Thread):
-
- def __init__(self, opt, que, qid):
- super().__init__()
- self._queue = que
- self.qid = qid
- self.opt = opt
-
- def run(self):
- while True:
- msg = self._queue.get()
- if isinstance(msg, str) and msg == 'quit':
- break
-
- output = msg['output']
- save_path = msg['save_path']
- cv2.imwrite(save_path, output)
- print(f'IO worker {self.qid} is done.')
\ No newline at end of file
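The `tile_process` docstring explains the crop-with-padding-then-merge strategy, and the index arithmetic is easy to sanity-check on paper. A small self-contained sketch that prints, for an assumed 500x750 input with `tile_size=200`, `tile_pad=10`, `scale=4`, which padded window each tile reads and where its output is pasted (all numbers illustrative):

```python
# Worked sketch of the tile bookkeeping in tile_process() above.
import math

height, width = 500, 750
tile_size, tile_pad, scale = 200, 10, 4

tiles_y, tiles_x = math.ceil(height / tile_size), math.ceil(width / tile_size)
for y in range(tiles_y):
    for x in range(tiles_x):
        in_x0, in_x1 = x * tile_size, min(x * tile_size + tile_size, width)
        in_y0, in_y1 = y * tile_size, min(y * tile_size + tile_size, height)
        # padded crop actually fed to the network (clamped at the image borders)
        pad_x0, pad_x1 = max(in_x0 - tile_pad, 0), min(in_x1 + tile_pad, width)
        pad_y0, pad_y1 = max(in_y0 - tile_pad, 0), min(in_y1 + tile_pad, height)
        # where the tile's output lands in the upscaled image, with the padding stripped
        out_x0, out_y0 = in_x0 * scale, in_y0 * scale
        print(f"tile ({y},{x}): feed [{pad_y0}:{pad_y1}, {pad_x0}:{pad_x1}] "
              f"-> paste at [{out_y0}:{in_y1 * scale}, {out_x0}:{in_x1 * scale}]")
```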
diff --git a/spaces/ygangang/VToonify/vtoonify/model/raft/core/__init__.py b/spaces/ygangang/VToonify/vtoonify/model/raft/core/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/segment_anything/modeling/transformer.py b/spaces/yizhangliu/Grounded-Segment-Anything/segment_anything/modeling/transformer.py
deleted file mode 100644
index f1a2812f613cc55b1d0b3e3e1d0c84a760d1fb87..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/segment_anything/modeling/transformer.py
+++ /dev/null
@@ -1,240 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from torch import Tensor, nn
-
-import math
-from typing import Tuple, Type
-
-from .common import MLPBlock
-
-
-class TwoWayTransformer(nn.Module):
- def __init__(
- self,
- depth: int,
- embedding_dim: int,
- num_heads: int,
- mlp_dim: int,
- activation: Type[nn.Module] = nn.ReLU,
- attention_downsample_rate: int = 2,
- ) -> None:
- """
- A transformer decoder that attends to an input image using
- queries whose positional embedding is supplied.
-
- Args:
- depth (int): number of layers in the transformer
- embedding_dim (int): the channel dimension for the input embeddings
- num_heads (int): the number of heads for multihead attention. Must
- divide embedding_dim
- mlp_dim (int): the channel dimension internal to the MLP block
- activation (nn.Module): the activation to use in the MLP block
- """
- super().__init__()
- self.depth = depth
- self.embedding_dim = embedding_dim
- self.num_heads = num_heads
- self.mlp_dim = mlp_dim
- self.layers = nn.ModuleList()
-
- for i in range(depth):
- self.layers.append(
- TwoWayAttentionBlock(
- embedding_dim=embedding_dim,
- num_heads=num_heads,
- mlp_dim=mlp_dim,
- activation=activation,
- attention_downsample_rate=attention_downsample_rate,
- skip_first_layer_pe=(i == 0),
- )
- )
-
- self.final_attn_token_to_image = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
- self.norm_final_attn = nn.LayerNorm(embedding_dim)
-
- def forward(
- self,
- image_embedding: Tensor,
- image_pe: Tensor,
- point_embedding: Tensor,
- ) -> Tuple[Tensor, Tensor]:
- """
- Args:
- image_embedding (torch.Tensor): image to attend to. Should be shape
- B x embedding_dim x h x w for any h and w.
- image_pe (torch.Tensor): the positional encoding to add to the image. Must
- have the same shape as image_embedding.
- point_embedding (torch.Tensor): the embedding to add to the query points.
- Must have shape B x N_points x embedding_dim for any N_points.
-
- Returns:
- torch.Tensor: the processed point_embedding
- torch.Tensor: the processed image_embedding
- """
- # BxCxHxW -> BxHWxC == B x N_image_tokens x C
- bs, c, h, w = image_embedding.shape
- image_embedding = image_embedding.flatten(2).permute(0, 2, 1)
- image_pe = image_pe.flatten(2).permute(0, 2, 1)
-
- # Prepare queries
- queries = point_embedding
- keys = image_embedding
-
- # Apply transformer blocks and final layernorm
- for layer in self.layers:
- queries, keys = layer(
- queries=queries,
- keys=keys,
- query_pe=point_embedding,
- key_pe=image_pe,
- )
-
-        # Apply the final attention layer from the points to the image
- q = queries + point_embedding
- k = keys + image_pe
- attn_out = self.final_attn_token_to_image(q=q, k=k, v=keys)
- queries = queries + attn_out
- queries = self.norm_final_attn(queries)
-
- return queries, keys
-
-
-class TwoWayAttentionBlock(nn.Module):
- def __init__(
- self,
- embedding_dim: int,
- num_heads: int,
- mlp_dim: int = 2048,
- activation: Type[nn.Module] = nn.ReLU,
- attention_downsample_rate: int = 2,
- skip_first_layer_pe: bool = False,
- ) -> None:
- """
- A transformer block with four layers: (1) self-attention of sparse
- inputs, (2) cross attention of sparse inputs to dense inputs, (3) mlp
- block on sparse inputs, and (4) cross attention of dense inputs to sparse
- inputs.
-
- Arguments:
- embedding_dim (int): the channel dimension of the embeddings
- num_heads (int): the number of heads in the attention layers
- mlp_dim (int): the hidden dimension of the mlp block
- activation (nn.Module): the activation of the mlp block
- skip_first_layer_pe (bool): skip the PE on the first layer
- """
- super().__init__()
- self.self_attn = Attention(embedding_dim, num_heads)
- self.norm1 = nn.LayerNorm(embedding_dim)
-
- self.cross_attn_token_to_image = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
- self.norm2 = nn.LayerNorm(embedding_dim)
-
- self.mlp = MLPBlock(embedding_dim, mlp_dim, activation)
- self.norm3 = nn.LayerNorm(embedding_dim)
-
- self.norm4 = nn.LayerNorm(embedding_dim)
- self.cross_attn_image_to_token = Attention(
- embedding_dim, num_heads, downsample_rate=attention_downsample_rate
- )
-
- self.skip_first_layer_pe = skip_first_layer_pe
-
- def forward(
- self, queries: Tensor, keys: Tensor, query_pe: Tensor, key_pe: Tensor
- ) -> Tuple[Tensor, Tensor]:
- # Self attention block
- if self.skip_first_layer_pe:
- queries = self.self_attn(q=queries, k=queries, v=queries)
- else:
- q = queries + query_pe
- attn_out = self.self_attn(q=q, k=q, v=queries)
- queries = queries + attn_out
- queries = self.norm1(queries)
-
- # Cross attention block, tokens attending to image embedding
- q = queries + query_pe
- k = keys + key_pe
- attn_out = self.cross_attn_token_to_image(q=q, k=k, v=keys)
- queries = queries + attn_out
- queries = self.norm2(queries)
-
- # MLP block
- mlp_out = self.mlp(queries)
- queries = queries + mlp_out
- queries = self.norm3(queries)
-
- # Cross attention block, image embedding attending to tokens
- q = queries + query_pe
- k = keys + key_pe
- attn_out = self.cross_attn_image_to_token(q=k, k=q, v=queries)
- keys = keys + attn_out
- keys = self.norm4(keys)
-
- return queries, keys
-
-
-class Attention(nn.Module):
- """
- An attention layer that allows for downscaling the size of the embedding
- after projection to queries, keys, and values.
- """
-
- def __init__(
- self,
- embedding_dim: int,
- num_heads: int,
- downsample_rate: int = 1,
- ) -> None:
- super().__init__()
- self.embedding_dim = embedding_dim
- self.internal_dim = embedding_dim // downsample_rate
- self.num_heads = num_heads
- assert self.internal_dim % num_heads == 0, "num_heads must divide embedding_dim."
-
- self.q_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.k_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.v_proj = nn.Linear(embedding_dim, self.internal_dim)
- self.out_proj = nn.Linear(self.internal_dim, embedding_dim)
-
- def _separate_heads(self, x: Tensor, num_heads: int) -> Tensor:
- b, n, c = x.shape
- x = x.reshape(b, n, num_heads, c // num_heads)
- return x.transpose(1, 2) # B x N_heads x N_tokens x C_per_head
-
- def _recombine_heads(self, x: Tensor) -> Tensor:
- b, n_heads, n_tokens, c_per_head = x.shape
- x = x.transpose(1, 2)
- return x.reshape(b, n_tokens, n_heads * c_per_head) # B x N_tokens x C
-
- def forward(self, q: Tensor, k: Tensor, v: Tensor) -> Tensor:
- # Input projections
- q = self.q_proj(q)
- k = self.k_proj(k)
- v = self.v_proj(v)
-
- # Separate into heads
- q = self._separate_heads(q, self.num_heads)
- k = self._separate_heads(k, self.num_heads)
- v = self._separate_heads(v, self.num_heads)
-
- # Attention
- _, _, _, c_per_head = q.shape
- attn = q @ k.permute(0, 1, 3, 2) # B x N_heads x N_tokens x N_tokens
- attn = attn / math.sqrt(c_per_head)
- attn = torch.softmax(attn, dim=-1)
-
- # Get output
- out = attn @ v
- out = self._recombine_heads(out)
- out = self.out_proj(out)
-
- return out
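The `Attention` class above projects to an `internal_dim`, splits it into heads, applies scaled dot-product attention, and recombines the heads. A minimal standalone sketch of just the split/score/recombine steps with toy tensors (no learned projections, so it is not the class itself):

```python
# Minimal standalone sketch of the head split / scaled dot-product / recombine steps.
import math
import torch

b, n, c, num_heads = 2, 16, 64, 8                     # toy sizes; c must be divisible by num_heads
q = k = v = torch.randn(b, n, c)

def separate_heads(x, num_heads):
    b, n, c = x.shape
    return x.reshape(b, n, num_heads, c // num_heads).transpose(1, 2)  # B x heads x N x C/heads

qh, kh, vh = (separate_heads(t, num_heads) for t in (q, k, v))
attn = torch.softmax(qh @ kh.transpose(-2, -1) / math.sqrt(c // num_heads), dim=-1)
out = attn @ vh                                        # B x heads x N x C/heads
out = out.transpose(1, 2).reshape(b, n, c)             # recombine heads -> B x N x C
print(out.shape)                                       # torch.Size([2, 16, 64])
```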
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convbert/convert_convbert_original_tf1_checkpoint_to_pytorch_and_tf2.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convbert/convert_convbert_original_tf1_checkpoint_to_pytorch_and_tf2.py
deleted file mode 100644
index 3d4ff779874b30b0c094c596cedaca597e03ed36..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/convbert/convert_convbert_original_tf1_checkpoint_to_pytorch_and_tf2.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Convert ConvBERT checkpoint."""
-
-import argparse
-
-from transformers import ConvBertConfig, ConvBertModel, TFConvBertModel, load_tf_weights_in_convbert
-from transformers.utils import logging
-
-
-logging.set_verbosity_info()
-
-
-def convert_orig_tf1_checkpoint_to_pytorch(tf_checkpoint_path, convbert_config_file, pytorch_dump_path):
- conf = ConvBertConfig.from_json_file(convbert_config_file)
- model = ConvBertModel(conf)
-
- model = load_tf_weights_in_convbert(model, conf, tf_checkpoint_path)
- model.save_pretrained(pytorch_dump_path)
-
- tf_model = TFConvBertModel.from_pretrained(pytorch_dump_path, from_pt=True)
- tf_model.save_pretrained(pytorch_dump_path)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- # Required parameters
- parser.add_argument(
- "--tf_checkpoint_path", default=None, type=str, required=True, help="Path to the TensorFlow checkpoint path."
- )
- parser.add_argument(
- "--convbert_config_file",
- default=None,
- type=str,
- required=True,
- help=(
- "The config json file corresponding to the pre-trained ConvBERT model. \n"
- "This specifies the model architecture."
- ),
- )
- parser.add_argument(
- "--pytorch_dump_path", default=None, type=str, required=True, help="Path to the output PyTorch model."
- )
- args = parser.parse_args()
- convert_orig_tf1_checkpoint_to_pytorch(args.tf_checkpoint_path, args.convbert_config_file, args.pytorch_dump_path)
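A hedged example of calling the conversion function above directly from Python rather than through the CLI; the paths are placeholders, not real checkpoints.

```python
# Placeholder paths: substitute a real TF1 ConvBERT checkpoint, config file, and output directory.
convert_orig_tf1_checkpoint_to_pytorch(
    tf_checkpoint_path="/path/to/tf1_checkpoint",
    convbert_config_file="/path/to/convbert_config.json",
    pytorch_dump_path="/path/to/output_dir",
)
```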
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deta/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deta/__init__.py
deleted file mode 100644
index 2d25a6a71602b38a48b23de4ab227969217ae16e..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/deta/__init__.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import TYPE_CHECKING
-
-from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available, is_vision_available
-
-
-_import_structure = {
- "configuration_deta": ["DETA_PRETRAINED_CONFIG_ARCHIVE_MAP", "DetaConfig"],
-}
-
-try:
- if not is_vision_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- pass
-else:
- _import_structure["image_processing_deta"] = ["DetaImageProcessor"]
-
-try:
- if not is_torch_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- pass
-else:
- _import_structure["modeling_deta"] = [
- "DETA_PRETRAINED_MODEL_ARCHIVE_LIST",
- "DetaForObjectDetection",
- "DetaModel",
- "DetaPreTrainedModel",
- ]
-
-
-if TYPE_CHECKING:
- from .configuration_deta import DETA_PRETRAINED_CONFIG_ARCHIVE_MAP, DetaConfig
-
- try:
- if not is_vision_available():
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- pass
- else:
- from .image_processing_deta import DetaImageProcessor
-
- try:
- if not is_torch_available():
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- pass
- else:
- from .modeling_deta import (
- DETA_PRETRAINED_MODEL_ARCHIVE_LIST,
- DetaForObjectDetection,
- DetaModel,
- DetaPreTrainedModel,
- )
-
-else:
- import sys
-
- sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
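The `__init__.py` above wires optional imports through `_LazyModule`. As a rough sketch of that pattern (not the `transformers` implementation; the class name and details here are illustrative), a lazy module defers each submodule import until the attribute is first accessed:

```python
import importlib
import types


class LazyModule(types.ModuleType):
    """Illustrative stand-in for `_LazyModule`: submodules are imported on first attribute access."""

    def __init__(self, name, import_structure):
        super().__init__(name)
        # map each public name to the submodule that defines it
        self._name_to_module = {
            attr: submodule for submodule, attrs in import_structure.items() for attr in attrs
        }

    def __getattr__(self, attr):
        if attr not in self._name_to_module:
            raise AttributeError(f"module {self.__name__!r} has no attribute {attr!r}")
        submodule = importlib.import_module(f".{self._name_to_module[attr]}", self.__name__)
        value = getattr(submodule, attr)
        setattr(self, attr, value)  # cache, so later lookups bypass __getattr__
        return value
```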
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_neo/configuration_gpt_neo.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_neo/configuration_gpt_neo.py
deleted file mode 100644
index 9b84b18e26c084179aa2528c301a99245b187165..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_neo/configuration_gpt_neo.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" GPT Neo model configuration"""
-
-from collections import OrderedDict
-from typing import Any, Mapping, Optional
-
-from ... import PreTrainedTokenizer, TensorType, is_torch_available
-from ...configuration_utils import PretrainedConfig
-from ...onnx import OnnxConfigWithPast
-from ...utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-GPT_NEO_PRETRAINED_CONFIG_ARCHIVE_MAP = {
- "EleutherAI/gpt-neo-1.3B": "https://huggingface.co/EleutherAI/gpt-neo-1.3B/resolve/main/config.json",
- # See all GPTNeo models at https://huggingface.co/models?filter=gpt_neo
-}
-
-
-class GPTNeoConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`GPTNeoModel`]. It is used to instantiate a GPT
- Neo model according to the specified arguments, defining the model architecture. Instantiating a configuration with
- the defaults will yield a similar configuration to that of the GPTNeo
- [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
-
- Args:
- vocab_size (`int`, *optional*, defaults to 50257):
-            Vocabulary size of the GPT Neo model. Defines the number of different tokens that can be represented by
-            the `input_ids` passed when calling [`GPTNeoModel`].
- max_position_embeddings (`int`, *optional*, defaults to 2048):
- The maximum sequence length that this model might ever be used with. Typically set this to something large
- just in case (e.g., 512 or 1024 or 2048).
- hidden_size (`int`, *optional*, defaults to 2048):
- Dimensionality of the encoder layers and the pooler layer.
- num_layers (`int`, *optional*, defaults to 24):
- Number of hidden layers in the Transformer encoder.
- attention_types (`List`, *optional*, defaults to `[[['global', 'local'], 12]]`):
-            The type of attention for each layer in a `List` of the following format `[[["attention_type"],
-            num_layers]]`, e.g. for a 24-layer model `[[["global"], 24]]` or `[[["global", "local"], 12]]`. Choose the
-            value of `attention_type` from `["global", "local"]`.
- num_heads (`int`, *optional*, defaults to 16):
- Number of attention heads for each attention layer in the Transformer encoder.
- intermediate_size (`int`, *optional*, defaults to 8192):
- Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
- window_size (`int`, *optional*, defaults to 256):
- The size of the sliding window for local attention.
- activation_function (`str` or `function`, *optional*, defaults to `"gelu_new"`):
- The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
- `"relu"`, `"selu"` and `"gelu_new"` are supported.
- resid_dropout (`float`, *optional*, defaults to 0.0):
- Residual dropout used in the attention pattern.
- embed_dropout (`float`, *optional*, defaults to 0.0):
-            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_dropout (`float`, *optional*, defaults to 0.0):
- The dropout ratio for the attention probabilities.
- classifier_dropout (`float`, *optional*, defaults to 0.1):
-            The dropout ratio for the hidden layer when doing token classification with the model
-            [`GPTNeoForTokenClassification`].
- layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
- The epsilon used by the layer normalization layers.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- use_cache (`bool`, *optional*, defaults to `True`):
- Whether or not the model should return the last key/values attentions (not used by all models). Only
- relevant if `config.is_decoder=True`.
- bos_token_id (`int`, *optional*, defaults to 50256):
- The id of the beginning of sentence token in the vocabulary.
- eos_token_id (`int`, *optional*, defaults to 50256):
- The id of the end of sentence token in the vocabulary.
-
- Example:
-
- ```python
- >>> from transformers import GPTNeoConfig, GPTNeoModel
-
- >>> # Initializing a GPTNeo EleutherAI/gpt-neo-1.3B style configuration
- >>> configuration = GPTNeoConfig()
-
- >>> # Initializing a model (with random weights) from the EleutherAI/gpt-neo-1.3B style configuration
- >>> model = GPTNeoModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
- model_type = "gpt_neo"
- keys_to_ignore_at_inference = ["past_key_values"]
- attribute_map = {"num_attention_heads": "num_heads", "num_hidden_layers": "num_layers"}
-
- def __init__(
- self,
- vocab_size=50257,
- max_position_embeddings=2048,
- hidden_size=2048,
- num_layers=24,
- attention_types=[[["global", "local"], 12]],
- num_heads=16,
- intermediate_size=None,
- window_size=256,
- activation_function="gelu_new",
- resid_dropout=0.0,
- embed_dropout=0.0,
- attention_dropout=0.0,
- classifier_dropout=0.1,
- layer_norm_epsilon=1e-5,
- initializer_range=0.02,
- use_cache=True,
- bos_token_id=50256,
- eos_token_id=50256,
- **kwargs,
- ):
- self.vocab_size = vocab_size
- self.max_position_embeddings = max_position_embeddings
- self.hidden_size = hidden_size
- self.num_layers = num_layers
- self.num_heads = num_heads
- self.intermediate_size = intermediate_size
- self.window_size = window_size
- self.activation_function = activation_function
- self.resid_dropout = resid_dropout
- self.embed_dropout = embed_dropout
- self.attention_dropout = attention_dropout
- self.classifier_dropout = classifier_dropout
- self.layer_norm_epsilon = layer_norm_epsilon
- self.initializer_range = initializer_range
- self.use_cache = use_cache
-
- self.bos_token_id = bos_token_id
- self.eos_token_id = eos_token_id
-
- self.attention_types = attention_types
- self.attention_layers = self.expand_attention_types_params(attention_types)
-
- if len(self.attention_layers) != self.num_layers:
- raise ValueError(
- "Configuration for convolutional module is incorrect. "
- "It is required that `len(config.attention_layers)` == `config.num_layers` "
- f"but is `len(config.attention_layers) = {len(self.attention_layers)}`, "
- f"`config.num_layers = {self.num_layers}`. "
- "`config.attention_layers` is prepared using `config.attention_types`. "
- "Please verify the value of `config.attention_types` argument."
- )
-
- super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
-
- @staticmethod
- def expand_attention_types_params(attention_types):
- attentions = []
- for item in attention_types:
- for _ in range(item[1]):
- attentions.extend(item[0])
- return attentions
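A worked example of the static helper above (the values follow directly from the code): each `["global", "local"]` pair is repeated 12 times, yielding one attention type per layer of a 24-layer model.

```python
layers = GPTNeoConfig.expand_attention_types_params([[["global", "local"], 12]])
assert len(layers) == 24
assert layers[:4] == ["global", "local", "global", "local"]
```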
-
-
-def custom_unfold(input, dimension, size, step):
- """Custom torch.Tensor.unfold implementation to enable the export to ONNX."""
- import torch
-
- shape = input.size()
- rank = len(shape)
- sizedim = shape[dimension]
-
- low_indices = torch.arange(0, sizedim, step)
- min_length = torch.div(sizedim - size, step, rounding_mode="floor") + 1
- indices = torch.arange(size) + low_indices[:min_length][:, None]
-
- s = [slice(None)] * rank
- s[dimension] = indices
- sliced = input[s]
-
- perm = list(range(0, rank + 1))
- perm.append(perm.pop(dimension + 1))
-
- return sliced.permute(perm)
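A quick sanity check (a sketch, assuming the function above is importable as written): for a small 2-D tensor, `custom_unfold` should match the built-in `torch.Tensor.unfold` that it re-implements for ONNX export.

```python
import torch

x = torch.arange(12.0).reshape(3, 4)
# unfold along dim 1 with window size 2 and step 1 -> shape (3, 3, 2)
assert torch.equal(custom_unfold(x, dimension=1, size=2, step=1), x.unfold(1, 2, 1))
```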
-
-
-def custom_get_block_length_and_num_blocks(seq_length, window_size):
- """
- Custom implementation for GPTNeoAttentionMixin._get_block_length_and_num_blocks to enable the export to ONNX as
- original implementation uses Python variables and control flow.
- """
- import torch
-
- candidates = torch.arange(1, window_size)
- remainders = torch.remainder(seq_length, candidates)
- divisor_indices = remainders == 0
- divisors = candidates[divisor_indices]
- largest_divisor = torch.max(divisors)
- return largest_divisor, torch.div(seq_length, largest_divisor, rounding_mode="floor")
-
-
-class GPTNeoOnnxConfig(OnnxConfigWithPast):
- @property
- def inputs(self) -> Mapping[str, Mapping[int, str]]:
- common_inputs = OrderedDict({"input_ids": {0: "batch", 1: "sequence"}})
- if self.use_past:
- self.fill_with_past_key_values_(common_inputs, direction="inputs")
- common_inputs["attention_mask"] = {0: "batch", 1: "past_sequence + sequence"}
- else:
- common_inputs["attention_mask"] = {0: "batch", 1: "sequence"}
-
- return common_inputs
-
- @property
- def num_attention_heads(self) -> int:
- return self._config.num_heads
-
- def generate_dummy_inputs(
- self,
- tokenizer: PreTrainedTokenizer,
- batch_size: int = -1,
- seq_length: int = -1,
- is_pair: bool = False,
- framework: Optional[TensorType] = None,
- ) -> Mapping[str, Any]:
- common_inputs = super(OnnxConfigWithPast, self).generate_dummy_inputs(
- tokenizer, batch_size=batch_size, seq_length=seq_length, is_pair=is_pair, framework=framework
- )
-
-        # We need to order the inputs in the way they appear in the forward()
- ordered_inputs = OrderedDict({"input_ids": common_inputs["input_ids"]})
-
- # Need to add the past_keys
- if self.use_past:
- if not is_torch_available():
- raise ValueError("Cannot generate dummy past_keys inputs without PyTorch installed.")
- else:
- import torch
-
- batch, seqlen = common_inputs["input_ids"].shape
- # Not using the same length for past_key_values
- past_key_values_length = seqlen + 2
- past_shape = (
- batch,
- self.num_attention_heads,
- past_key_values_length,
- self._config.hidden_size // self.num_attention_heads,
- )
- ordered_inputs["past_key_values"] = [
- (torch.zeros(past_shape), torch.zeros(past_shape)) for _ in range(self.num_layers)
- ]
-
- ordered_inputs["attention_mask"] = common_inputs["attention_mask"]
- if self.use_past:
- mask_dtype = ordered_inputs["attention_mask"].dtype
- ordered_inputs["attention_mask"] = torch.cat(
- [ordered_inputs["attention_mask"], torch.ones(batch, past_key_values_length, dtype=mask_dtype)], dim=1
- )
-
- return ordered_inputs
-
- @property
- def default_onnx_opset(self) -> int:
- return 13
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevit/configuration_mobilevit.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevit/configuration_mobilevit.py
deleted file mode 100644
index a4aafe997eb28fac6a985f94ae4036cc958f067e..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilevit/configuration_mobilevit.py
+++ /dev/null
@@ -1,184 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" MobileViT model configuration"""
-
-from collections import OrderedDict
-from typing import Mapping
-
-from packaging import version
-
-from ...configuration_utils import PretrainedConfig
-from ...onnx import OnnxConfig
-from ...utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-MOBILEVIT_PRETRAINED_CONFIG_ARCHIVE_MAP = {
- "apple/mobilevit-small": "https://huggingface.co/apple/mobilevit-small/resolve/main/config.json",
- "apple/mobilevit-x-small": "https://huggingface.co/apple/mobilevit-x-small/resolve/main/config.json",
- "apple/mobilevit-xx-small": "https://huggingface.co/apple/mobilevit-xx-small/resolve/main/config.json",
- "apple/deeplabv3-mobilevit-small": (
- "https://huggingface.co/apple/deeplabv3-mobilevit-small/resolve/main/config.json"
- ),
- "apple/deeplabv3-mobilevit-x-small": (
- "https://huggingface.co/apple/deeplabv3-mobilevit-x-small/resolve/main/config.json"
- ),
- "apple/deeplabv3-mobilevit-xx-small": (
- "https://huggingface.co/apple/deeplabv3-mobilevit-xx-small/resolve/main/config.json"
- ),
- # See all MobileViT models at https://huggingface.co/models?filter=mobilevit
-}
-
-
-class MobileViTConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`MobileViTModel`]. It is used to instantiate a
- MobileViT model according to the specified arguments, defining the model architecture. Instantiating a
- configuration with the defaults will yield a similar configuration to that of the MobileViT
- [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
- Args:
- num_channels (`int`, *optional*, defaults to 3):
- The number of input channels.
- image_size (`int`, *optional*, defaults to 256):
- The size (resolution) of each image.
- patch_size (`int`, *optional*, defaults to 2):
- The size (resolution) of each patch.
- hidden_sizes (`List[int]`, *optional*, defaults to `[144, 192, 240]`):
- Dimensionality (hidden size) of the Transformer encoders at each stage.
- neck_hidden_sizes (`List[int]`, *optional*, defaults to `[16, 32, 64, 96, 128, 160, 640]`):
- The number of channels for the feature maps of the backbone.
- num_attention_heads (`int`, *optional*, defaults to 4):
- Number of attention heads for each attention layer in the Transformer encoder.
- mlp_ratio (`float`, *optional*, defaults to 2.0):
- The ratio of the number of channels in the output of the MLP to the number of channels in the input.
- expand_ratio (`float`, *optional*, defaults to 4.0):
- Expansion factor for the MobileNetv2 layers.
- hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
- The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
- conv_kernel_size (`int`, *optional*, defaults to 3):
- The size of the convolutional kernel in the MobileViT layer.
- output_stride (`int`, *optional*, defaults to 32):
- The ratio of the spatial resolution of the output to the resolution of the input image.
- hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
-            The dropout probability for all fully connected layers in the Transformer encoder.
- attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
- The dropout ratio for the attention probabilities.
- classifier_dropout_prob (`float`, *optional*, defaults to 0.1):
- The dropout ratio for attached classifiers.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layer_norm_eps (`float`, *optional*, defaults to 1e-05):
- The epsilon used by the layer normalization layers.
- qkv_bias (`bool`, *optional*, defaults to `True`):
- Whether to add a bias to the queries, keys and values.
- aspp_out_channels (`int`, *optional*, defaults to 256):
- Number of output channels used in the ASPP layer for semantic segmentation.
- atrous_rates (`List[int]`, *optional*, defaults to `[6, 12, 18]`):
- Dilation (atrous) factors used in the ASPP layer for semantic segmentation.
- aspp_dropout_prob (`float`, *optional*, defaults to 0.1):
- The dropout ratio for the ASPP layer for semantic segmentation.
- semantic_loss_ignore_index (`int`, *optional*, defaults to 255):
- The index that is ignored by the loss function of the semantic segmentation model.
-
- Example:
-
- ```python
- >>> from transformers import MobileViTConfig, MobileViTModel
-
- >>> # Initializing a mobilevit-small style configuration
- >>> configuration = MobileViTConfig()
-
- >>> # Initializing a model from the mobilevit-small style configuration
- >>> model = MobileViTModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
- model_type = "mobilevit"
-
- def __init__(
- self,
- num_channels=3,
- image_size=256,
- patch_size=2,
- hidden_sizes=[144, 192, 240],
- neck_hidden_sizes=[16, 32, 64, 96, 128, 160, 640],
- num_attention_heads=4,
- mlp_ratio=2.0,
- expand_ratio=4.0,
- hidden_act="silu",
- conv_kernel_size=3,
- output_stride=32,
- hidden_dropout_prob=0.1,
- attention_probs_dropout_prob=0.0,
- classifier_dropout_prob=0.1,
- initializer_range=0.02,
- layer_norm_eps=1e-5,
- qkv_bias=True,
- aspp_out_channels=256,
- atrous_rates=[6, 12, 18],
- aspp_dropout_prob=0.1,
- semantic_loss_ignore_index=255,
- **kwargs,
- ):
- super().__init__(**kwargs)
-
- self.num_channels = num_channels
- self.image_size = image_size
- self.patch_size = patch_size
- self.hidden_sizes = hidden_sizes
- self.neck_hidden_sizes = neck_hidden_sizes
- self.num_attention_heads = num_attention_heads
- self.mlp_ratio = mlp_ratio
- self.expand_ratio = expand_ratio
- self.hidden_act = hidden_act
- self.conv_kernel_size = conv_kernel_size
- self.output_stride = output_stride
- self.hidden_dropout_prob = hidden_dropout_prob
- self.attention_probs_dropout_prob = attention_probs_dropout_prob
- self.classifier_dropout_prob = classifier_dropout_prob
- self.initializer_range = initializer_range
- self.layer_norm_eps = layer_norm_eps
- self.qkv_bias = qkv_bias
-
- # decode head attributes for semantic segmentation
- self.aspp_out_channels = aspp_out_channels
- self.atrous_rates = atrous_rates
- self.aspp_dropout_prob = aspp_dropout_prob
- self.semantic_loss_ignore_index = semantic_loss_ignore_index
-
-
-class MobileViTOnnxConfig(OnnxConfig):
- torch_onnx_minimum_version = version.parse("1.11")
-
- @property
- def inputs(self) -> Mapping[str, Mapping[int, str]]:
- return OrderedDict([("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})])
-
- @property
- def outputs(self) -> Mapping[str, Mapping[int, str]]:
- if self.task == "image-classification":
- return OrderedDict([("logits", {0: "batch"})])
- else:
- return OrderedDict([("last_hidden_state", {0: "batch"}), ("pooler_output", {0: "batch"})])
-
- @property
- def atol_for_validation(self) -> float:
- return 1e-4
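A small sketch of what two of the arguments above imply for feature-map geometry (this reading is based only on the docstring; the exact backbone layout is defined elsewhere in the modeling file):

```python
config = MobileViTConfig(image_size=256, output_stride=32)
# output_stride is the ratio of input resolution to the final feature-map resolution,
# so a 256x256 image ends up as an 8x8 spatial grid before the classifier head.
feature_map_size = config.image_size // config.output_stride
assert feature_map_size == 8
```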
diff --git a/spaces/ykilcher/apes/calc_metrics.py b/spaces/ykilcher/apes/calc_metrics.py
deleted file mode 100644
index 03e828195a096f6f78da241b700c16f56327bdb8..0000000000000000000000000000000000000000
--- a/spaces/ykilcher/apes/calc_metrics.py
+++ /dev/null
@@ -1,190 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Calculate quality metrics for previous training run or pretrained network pickle."""
-
-import os
-import click
-import json
-import tempfile
-import copy
-import torch
-import dnnlib
-
-import legacy
-from metrics import metric_main
-from metrics import metric_utils
-from torch_utils import training_stats
-from torch_utils import custom_ops
-from torch_utils import misc
-
-#----------------------------------------------------------------------------
-
-def subprocess_fn(rank, args, temp_dir):
- dnnlib.util.Logger(should_flush=True)
-
- # Init torch.distributed.
- if args.num_gpus > 1:
- init_file = os.path.abspath(os.path.join(temp_dir, '.torch_distributed_init'))
- if os.name == 'nt':
- init_method = 'file:///' + init_file.replace('\\', '/')
- torch.distributed.init_process_group(backend='gloo', init_method=init_method, rank=rank, world_size=args.num_gpus)
- else:
- init_method = f'file://{init_file}'
- torch.distributed.init_process_group(backend='nccl', init_method=init_method, rank=rank, world_size=args.num_gpus)
-
- # Init torch_utils.
- sync_device = torch.device('cuda', rank) if args.num_gpus > 1 else None
- training_stats.init_multiprocessing(rank=rank, sync_device=sync_device)
- if rank != 0 or not args.verbose:
- custom_ops.verbosity = 'none'
-
- # Print network summary.
- device = torch.device('cuda', rank)
- torch.backends.cudnn.benchmark = True
- torch.backends.cuda.matmul.allow_tf32 = False
- torch.backends.cudnn.allow_tf32 = False
- G = copy.deepcopy(args.G).eval().requires_grad_(False).to(device)
- if rank == 0 and args.verbose:
- z = torch.empty([1, G.z_dim], device=device)
- c = torch.empty([1, G.c_dim], device=device)
- misc.print_module_summary(G, [z, c])
-
- # Calculate each metric.
- for metric in args.metrics:
- if rank == 0 and args.verbose:
- print(f'Calculating {metric}...')
- progress = metric_utils.ProgressMonitor(verbose=args.verbose)
- result_dict = metric_main.calc_metric(metric=metric, G=G, dataset_kwargs=args.dataset_kwargs,
- num_gpus=args.num_gpus, rank=rank, device=device, progress=progress)
- if rank == 0:
- metric_main.report_metric(result_dict, run_dir=args.run_dir, snapshot_pkl=args.network_pkl)
- if rank == 0 and args.verbose:
- print()
-
- # Done.
- if rank == 0 and args.verbose:
- print('Exiting...')
-
-#----------------------------------------------------------------------------
-
-class CommaSeparatedList(click.ParamType):
- name = 'list'
-
- def convert(self, value, param, ctx):
- _ = param, ctx
- if value is None or value.lower() == 'none' or value == '':
- return []
- return value.split(',')
-
-#----------------------------------------------------------------------------
-
-@click.command()
-@click.pass_context
-@click.option('network_pkl', '--network', help='Network pickle filename or URL', metavar='PATH', required=True)
-@click.option('--metrics', help='Comma-separated list or "none"', type=CommaSeparatedList(), default='fid50k_full', show_default=True)
-@click.option('--data', help='Dataset to evaluate metrics against (directory or zip) [default: same as training data]', metavar='PATH')
-@click.option('--mirror', help='Whether the dataset was augmented with x-flips during training [default: look up]', type=bool, metavar='BOOL')
-@click.option('--gpus', help='Number of GPUs to use', type=int, default=1, metavar='INT', show_default=True)
-@click.option('--verbose', help='Print optional information', type=bool, default=True, metavar='BOOL', show_default=True)
-
-def calc_metrics(ctx, network_pkl, metrics, data, mirror, gpus, verbose):
- """Calculate quality metrics for previous training run or pretrained network pickle.
-
- Examples:
-
- \b
- # Previous training run: look up options automatically, save result to JSONL file.
- python calc_metrics.py --metrics=pr50k3_full \\
- --network=~/training-runs/00000-ffhq10k-res64-auto1/network-snapshot-000000.pkl
-
- \b
- # Pre-trained network pickle: specify dataset explicitly, print result to stdout.
- python calc_metrics.py --metrics=fid50k_full --data=~/datasets/ffhq.zip --mirror=1 \\
- --network=https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl
-
- Available metrics:
-
- \b
- ADA paper:
- fid50k_full Frechet inception distance against the full dataset.
- kid50k_full Kernel inception distance against the full dataset.
-      pr50k3_full     Precision and recall against the full dataset.
- is50k Inception score for CIFAR-10.
-
- \b
- StyleGAN and StyleGAN2 papers:
- fid50k Frechet inception distance against 50k real images.
- kid50k Kernel inception distance against 50k real images.
- pr50k3 Precision and recall against 50k real images.
- ppl2_wend Perceptual path length in W at path endpoints against full image.
- ppl_zfull Perceptual path length in Z for full paths against cropped image.
- ppl_wfull Perceptual path length in W for full paths against cropped image.
- ppl_zend Perceptual path length in Z at path endpoints against cropped image.
- ppl_wend Perceptual path length in W at path endpoints against cropped image.
- """
- dnnlib.util.Logger(should_flush=True)
-
- # Validate arguments.
- args = dnnlib.EasyDict(metrics=metrics, num_gpus=gpus, network_pkl=network_pkl, verbose=verbose)
- if not all(metric_main.is_valid_metric(metric) for metric in args.metrics):
- ctx.fail('\n'.join(['--metrics can only contain the following values:'] + metric_main.list_valid_metrics()))
- if not args.num_gpus >= 1:
- ctx.fail('--gpus must be at least 1')
-
- # Load network.
- if not dnnlib.util.is_url(network_pkl, allow_file_urls=True) and not os.path.isfile(network_pkl):
- ctx.fail('--network must point to a file or URL')
- if args.verbose:
- print(f'Loading network from "{network_pkl}"...')
- with dnnlib.util.open_url(network_pkl, verbose=args.verbose) as f:
- network_dict = legacy.load_network_pkl(f)
- args.G = network_dict['G_ema'] # subclass of torch.nn.Module
-
- # Initialize dataset options.
- if data is not None:
- args.dataset_kwargs = dnnlib.EasyDict(class_name='training.dataset.ImageFolderDataset', path=data)
- elif network_dict['training_set_kwargs'] is not None:
- args.dataset_kwargs = dnnlib.EasyDict(network_dict['training_set_kwargs'])
- else:
- ctx.fail('Could not look up dataset options; please specify --data')
-
- # Finalize dataset options.
- args.dataset_kwargs.resolution = args.G.img_resolution
- args.dataset_kwargs.use_labels = (args.G.c_dim != 0)
- if mirror is not None:
- args.dataset_kwargs.xflip = mirror
-
- # Print dataset options.
- if args.verbose:
- print('Dataset options:')
- print(json.dumps(args.dataset_kwargs, indent=2))
-
- # Locate run dir.
- args.run_dir = None
- if os.path.isfile(network_pkl):
- pkl_dir = os.path.dirname(network_pkl)
- if os.path.isfile(os.path.join(pkl_dir, 'training_options.json')):
- args.run_dir = pkl_dir
-
- # Launch processes.
- if args.verbose:
- print('Launching processes...')
- torch.multiprocessing.set_start_method('spawn')
- with tempfile.TemporaryDirectory() as temp_dir:
- if args.num_gpus == 1:
- subprocess_fn(rank=0, args=args, temp_dir=temp_dir)
- else:
- torch.multiprocessing.spawn(fn=subprocess_fn, args=(args, temp_dir), nprocs=args.num_gpus)
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- calc_metrics() # pylint: disable=no-value-for-parameter
-
-#----------------------------------------------------------------------------
diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/modules/losses.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/modules/losses.py
deleted file mode 100644
index cd21799eccde350c3aac0bdd661baf96ed220147..0000000000000000000000000000000000000000
--- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/modules/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import modules.commons as commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
- #print(logs_p)
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
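Written out, the per-element quantity accumulated in `kl_loss` above (with sigma = exp(logs)) is a single-sample estimate of the KL divergence between the posterior q = N(m_q, sigma_q^2) and the prior p = N(m_p, sigma_p^2), where the q-dependent quadratic term has been replaced by its expectation of one half. This is a reading of the code, not a derivation taken from the repository:

```latex
\mathrm{kl}(z_p) = \log\sigma_p - \log\sigma_q - \tfrac{1}{2}
                 + \frac{(z_p - \mu_p)^2}{2\,\sigma_p^{2}},
\qquad
\mathbb{E}_{z_p \sim q}\big[\mathrm{kl}(z_p)\big] = \mathrm{KL}\!\big(q \,\Vert\, p\big)
```

The returned value is then the `z_mask`-weighted mean of this quantity over valid time steps.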
diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vdecoder/hifiganwithsnake/env.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/vdecoder/hifiganwithsnake/env.py
deleted file mode 100644
index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000
--- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vdecoder/hifiganwithsnake/env.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import os
-import shutil
-
-
-class AttrDict(dict):
- def __init__(self, *args, **kwargs):
- super(AttrDict, self).__init__(*args, **kwargs)
- self.__dict__ = self
-
-
-def build_env(config, config_name, path):
- t_path = os.path.join(path, config_name)
- if config != t_path:
- os.makedirs(path, exist_ok=True)
- shutil.copyfile(config, os.path.join(path, config_name))
diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/whisper/model.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/whisper/model.py
deleted file mode 100644
index cb3781c17a1e78a33bf62246e5134e8512206d0d..0000000000000000000000000000000000000000
--- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/vencoder/whisper/model.py
+++ /dev/null
@@ -1,269 +0,0 @@
-from dataclasses import dataclass
-from typing import Dict
-from typing import Iterable, Optional
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import Tensor
-from torch import nn
-
-from .decoding import detect_language as detect_language_function, decode as decode_function
-
-
-@dataclass
-class ModelDimensions:
- n_mels: int
- n_audio_ctx: int
- n_audio_state: int
- n_audio_head: int
- n_audio_layer: int
- n_vocab: int
- n_text_ctx: int
- n_text_state: int
- n_text_head: int
- n_text_layer: int
-
-
-class LayerNorm(nn.LayerNorm):
- def forward(self, x: Tensor) -> Tensor:
- return super().forward(x.float()).type(x.dtype)
-
-
-class Linear(nn.Linear):
- def forward(self, x: Tensor) -> Tensor:
- return F.linear(
- x, self.weight.to(x.dtype), None if self.bias is None else self.bias.to(x.dtype)
- )
-
-
-class Conv1d(nn.Conv1d):
- def _conv_forward(self, x: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor:
- return super()._conv_forward(
- x, weight.to(x.dtype), None if bias is None else bias.to(x.dtype)
- )
-
-
-def sinusoids(length, channels, max_timescale=10000):
- """Returns sinusoids for positional embedding"""
- assert channels % 2 == 0
- log_timescale_increment = np.log(max_timescale) / (channels // 2 - 1)
- inv_timescales = torch.exp(-log_timescale_increment * torch.arange(channels // 2))
- scaled_time = torch.arange(length)[:, np.newaxis] * inv_timescales[np.newaxis, :]
- return torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], dim=1)
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, n_state: int, n_head: int):
- super().__init__()
- self.n_head = n_head
- self.query = Linear(n_state, n_state)
- self.key = Linear(n_state, n_state, bias=False)
- self.value = Linear(n_state, n_state)
- self.out = Linear(n_state, n_state)
-
- def forward(
- self,
- x: Tensor,
- xa: Optional[Tensor] = None,
- mask: Optional[Tensor] = None,
- kv_cache: Optional[dict] = None,
- ):
- q = self.query(x)
-
- if kv_cache is None or xa is None or self.key not in kv_cache:
- # hooks, if installed (i.e. kv_cache is not None), will prepend the cached kv tensors;
- # otherwise, perform key/value projections for self- or cross-attention as usual.
- k = self.key(x if xa is None else xa)
- v = self.value(x if xa is None else xa)
- else:
- # for cross-attention, calculate keys and values once and reuse in subsequent calls.
- k = kv_cache[self.key]
- v = kv_cache[self.value]
-
- wv, qk = self.qkv_attention(q, k, v, mask)
- return self.out(wv), qk
-
- def qkv_attention(self, q: Tensor, k: Tensor, v: Tensor, mask: Optional[Tensor] = None):
- n_batch, n_ctx, n_state = q.shape
- scale = (n_state // self.n_head) ** -0.25
- q = q.view(*q.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) * scale
- k = k.view(*k.shape[:2], self.n_head, -1).permute(0, 2, 3, 1) * scale
- v = v.view(*v.shape[:2], self.n_head, -1).permute(0, 2, 1, 3)
-
- qk = q @ k
- if mask is not None:
- qk = qk + mask[:n_ctx, :n_ctx]
- qk = qk.float()
-
- w = F.softmax(qk, dim=-1).to(q.dtype)
- return (w @ v).permute(0, 2, 1, 3).flatten(start_dim=2), qk.detach()
-
-
-class ResidualAttentionBlock(nn.Module):
- def __init__(self, n_state: int, n_head: int, cross_attention: bool = False):
- super().__init__()
-
- self.attn = MultiHeadAttention(n_state, n_head)
- self.attn_ln = LayerNorm(n_state)
-
- self.cross_attn = MultiHeadAttention(n_state, n_head) if cross_attention else None
- self.cross_attn_ln = LayerNorm(n_state) if cross_attention else None
-
- n_mlp = n_state * 4
- self.mlp = nn.Sequential(Linear(n_state, n_mlp), nn.GELU(), Linear(n_mlp, n_state))
- self.mlp_ln = LayerNorm(n_state)
-
- def forward(
- self,
- x: Tensor,
- xa: Optional[Tensor] = None,
- mask: Optional[Tensor] = None,
- kv_cache: Optional[dict] = None,
- ):
- x = x + self.attn(self.attn_ln(x), mask=mask, kv_cache=kv_cache)[0]
- if self.cross_attn:
- x = x + self.cross_attn(self.cross_attn_ln(x), xa, kv_cache=kv_cache)[0]
- x = x + self.mlp(self.mlp_ln(x))
- return x
-
-
-class AudioEncoder(nn.Module):
- def __init__(self, n_mels: int, n_ctx: int, n_state: int, n_head: int, n_layer: int):
- super().__init__()
- self.conv1 = Conv1d(n_mels, n_state, kernel_size=3, padding=1)
- self.conv2 = Conv1d(n_state, n_state, kernel_size=3, stride=2, padding=1)
- self.register_buffer("positional_embedding", sinusoids(n_ctx, n_state))
-
- self.blocks: Iterable[ResidualAttentionBlock] = nn.ModuleList(
- [ResidualAttentionBlock(n_state, n_head) for _ in range(n_layer)]
- )
- self.ln_post = LayerNorm(n_state)
-
- def forward(self, x: Tensor):
- """
- x : torch.Tensor, shape = (batch_size, n_mels, n_ctx)
- the mel spectrogram of the audio
- """
- x = F.gelu(self.conv1(x))
- x = F.gelu(self.conv2(x))
- x = x.permute(0, 2, 1)
-
- len_x = x.shape[1]
- len_e = self.positional_embedding.shape[0]
- assert len_x <= len_e, "incorrect audio shape"
- pos_e = self.positional_embedding[:len_x, :]
- x = (x + pos_e).to(x.dtype)
-
- for block in self.blocks:
- x = block(x)
-
- x = self.ln_post(x)
- return x
-
-
-class TextDecoder(nn.Module):
- def __init__(self, n_vocab: int, n_ctx: int, n_state: int, n_head: int, n_layer: int):
- super().__init__()
-
- self.token_embedding = nn.Embedding(n_vocab, n_state)
- self.positional_embedding = nn.Parameter(torch.empty(n_ctx, n_state))
-
- self.blocks: Iterable[ResidualAttentionBlock] = nn.ModuleList(
- [ResidualAttentionBlock(n_state, n_head, cross_attention=True) for _ in range(n_layer)]
- )
- self.ln = LayerNorm(n_state)
-
- mask = torch.empty(n_ctx, n_ctx).fill_(-np.inf).triu_(1)
- self.register_buffer("mask", mask, persistent=False)
-
- def forward(self, x: Tensor, xa: Tensor, kv_cache: Optional[dict] = None):
- """
- x : torch.LongTensor, shape = (batch_size, <= n_ctx)
- the text tokens
-        xa : torch.Tensor, shape = (batch_size, n_audio_ctx, n_audio_state)
- the encoded audio features to be attended on
- """
- offset = next(iter(kv_cache.values())).shape[1] if kv_cache else 0
- x = self.token_embedding(x) + self.positional_embedding[offset : offset + x.shape[-1]]
- x = x.to(xa.dtype)
-
- for block in self.blocks:
- x = block(x, xa, mask=self.mask, kv_cache=kv_cache)
-
- x = self.ln(x)
- logits = (x @ torch.transpose(self.token_embedding.weight.to(x.dtype), 0, 1)).float()
-
- return logits
-
-
-class Whisper(nn.Module):
- def __init__(self, dims: ModelDimensions):
- super().__init__()
- self.dims = dims
- self.encoder = AudioEncoder(
- self.dims.n_mels,
- self.dims.n_audio_ctx,
- self.dims.n_audio_state,
- self.dims.n_audio_head,
- self.dims.n_audio_layer,
- )
- self.decoder = TextDecoder(
- self.dims.n_vocab,
- self.dims.n_text_ctx,
- self.dims.n_text_state,
- self.dims.n_text_head,
- self.dims.n_text_layer,
- )
-
- def embed_audio(self, mel: torch.Tensor):
- return self.encoder(mel)
-
- def logits(self, tokens: torch.Tensor, audio_features: torch.Tensor):
- return self.decoder(tokens, audio_features)
-
- def forward(self, mel: torch.Tensor, tokens: torch.Tensor) -> Dict[str, torch.Tensor]:
- return self.decoder(tokens, self.encoder(mel))
-
- @property
- def device(self):
- return next(self.parameters()).device
-
- @property
- def is_multilingual(self):
- return self.dims.n_vocab == 51865
-
- def install_kv_cache_hooks(self, cache: Optional[dict] = None):
- """
- The `MultiHeadAttention` module optionally accepts `kv_cache` which stores the key and value
- tensors calculated for the previous positions. This method returns a dictionary that stores
- all caches, and the necessary hooks for the key and value projection modules that save the
- intermediate tensors to be reused during later calculations.
-
- Returns
- -------
- cache : Dict[nn.Module, torch.Tensor]
-            A dictionary object mapping the key/value projection modules to their caches
-        hooks : List[RemovableHandle]
-            List of PyTorch RemovableHandle objects that can be used to remove the installed hooks
- """
- cache = {**cache} if cache is not None else {}
- hooks = []
-
- def save_to_cache(module, _, output):
- if module not in cache or output.shape[1] > self.decoder.positional_embedding.shape[0]:
- cache[module] = output # save as-is, for the first token or cross attention
- else:
- cache[module] = torch.cat([cache[module], output], dim=1).detach()
- return cache[module]
-
- def install_hooks(layer: nn.Module):
- if isinstance(layer, MultiHeadAttention):
- hooks.append(layer.key.register_forward_hook(save_to_cache))
- hooks.append(layer.value.register_forward_hook(save_to_cache))
-
- self.decoder.apply(install_hooks)
- return cache, hooks
-
- detect_language = detect_language_function
- decode = decode_function
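A hedged usage sketch for the cache mechanism described in `install_kv_cache_hooks` above; how `model`, `mel`, and `tokens` are produced is outside this file and assumed here.

```python
cache, hooks = model.install_kv_cache_hooks()
audio_features = model.embed_audio(mel)                          # run the encoder once
logits = model.decoder(tokens, audio_features, kv_cache=cache)   # later calls reuse cached k/v
for hook in hooks:
    hook.remove()                                                # detach the hooks when done
```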
diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/flask_api.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/flask_api.py
deleted file mode 100644
index b3f1e06847b2711a8e5841a4c95375445470d2ee..0000000000000000000000000000000000000000
--- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/flask_api.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import io
-import logging
-
-import soundfile
-import torch
-import torchaudio
-from flask import Flask, request, send_file
-from flask_cors import CORS
-
-from inference.infer_tool import Svc, RealTimeVC
-
-app = Flask(__name__)
-
-CORS(app)
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-
-
-@app.route("/voiceChangeModel", methods=["POST"])
-def voice_change_model():
- request_form = request.form
- wave_file = request.files.get("sample", None)
-    # pitch-shift amount requested by the client
- f_pitch_change = float(request_form.get("fPitchChange", 0))
-    # sample rate required by the DAW
- daw_sample = int(float(request_form.get("sampleRate", 0)))
- speaker_id = int(float(request_form.get("sSpeakId", 0)))
-    # get the wav file from the HTTP request and convert it
- input_wav_path = io.BytesIO(wave_file.read())
-
-    # model inference
- if raw_infer:
- # out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path)
- out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path, cluster_infer_ratio=0,
- auto_predict_f0=False, noice_scale=0.4, f0_filter=False)
- tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample)
- else:
- out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path, cluster_infer_ratio=0,
- auto_predict_f0=False, noice_scale=0.4, f0_filter=False)
- tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample)
-    # return the audio
- out_wav_path = io.BytesIO()
- soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav")
- out_wav_path.seek(0)
- return send_file(out_wav_path, download_name="temp.wav", as_attachment=True)
-
-
-if __name__ == '__main__':
-    # True: synthesize each slice directly; False: cross-fade between slices
-    # Setting the VST plugin slice length to 0.3-0.5 s lowers latency; direct slicing can pop at slice joints, cross-fading can slightly overlap audio
-    # Pick whichever trade-off is acceptable, or raise the VST max slice length to 1 s; it is set to True here for more stable quality at higher latency
- raw_infer = True
-    # each model corresponds to exactly one config
- model_name = "logs/32k/G_174000-Copy1.pth"
- config_name = "configs/config.json"
- cluster_model_path = "logs/44k/kmeans_10000.pt"
- svc_model = Svc(model_name, config_name, cluster_model_path=cluster_model_path)
- svc = RealTimeVC()
-    # must match the VST plugin settings; changing it is not recommended
- app.run(port=6842, host="0.0.0.0", debug=False, threaded=False)
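A hedged client-side sketch for the `/voiceChangeModel` endpoint above; the host and port follow the `app.run()` call, while the file paths and field values are illustrative.

```python
import requests

with open("input.wav", "rb") as wav:
    resp = requests.post(
        "http://127.0.0.1:6842/voiceChangeModel",
        files={"sample": wav},
        data={"fPitchChange": 0, "sampleRate": 44100, "sSpeakId": 0},
    )

with open("output.wav", "wb") as out:
    out.write(resp.content)  # the server returns the converted audio as a wav attachment
```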
diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/README.md b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/README.md
deleted file mode 100644
index 912cc29927542bfe4258d3208cf52d73cb0ea477..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
-This directory provides definitions for a few common models, dataloaders, schedulers,
-and optimizers that are often used in training.
-The definitions of these objects are provided in the form of lazy instantiation:
-their arguments can be edited by users before constructing the objects.
-
-They can be imported, or loaded by the `model_zoo.get_config` API, in users' own configs.
diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py
deleted file mode 100644
index 0604feaaf42ffd072e3cb91f395204f818fa709a..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/evaluation/lvis_evaluation.py
+++ /dev/null
@@ -1,380 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import itertools
-import json
-import logging
-import os
-import pickle
-from collections import OrderedDict
-import torch
-
-import detectron2.utils.comm as comm
-from detectron2.config import CfgNode
-from detectron2.data import MetadataCatalog
-from detectron2.structures import Boxes, BoxMode, pairwise_iou
-from detectron2.utils.file_io import PathManager
-from detectron2.utils.logger import create_small_table
-
-from .coco_evaluation import instances_to_coco_json
-from .evaluator import DatasetEvaluator
-
-
-class LVISEvaluator(DatasetEvaluator):
- """
- Evaluate object proposal and instance detection/segmentation outputs using
- LVIS's metrics and evaluation API.
- """
-
- def __init__(
- self,
- dataset_name,
- tasks=None,
- distributed=True,
- output_dir=None,
- *,
- max_dets_per_image=None,
- ):
- """
- Args:
- dataset_name (str): name of the dataset to be evaluated.
- It must have the following corresponding metadata:
- "json_file": the path to the LVIS format annotation
- tasks (tuple[str]): tasks that can be evaluated under the given
- configuration. A task is one of "bbox", "segm".
- By default, will infer this automatically from predictions.
- distributed (True): if True, will collect results from all ranks for evaluation.
- Otherwise, will evaluate the results in the current process.
- output_dir (str): optional, an output directory to dump results.
- max_dets_per_image (None or int): limit on maximum detections per image in evaluating AP
- This limit, by default of the LVIS dataset, is 300.
- """
- from lvis import LVIS
-
- self._logger = logging.getLogger(__name__)
-
- if tasks is not None and isinstance(tasks, CfgNode):
- self._logger.warn(
- "COCO Evaluator instantiated using config, this is deprecated behavior."
- " Please pass in explicit arguments instead."
- )
-            self._tasks = None  # Inferring it from predictions should be better
- else:
- self._tasks = tasks
-
- self._distributed = distributed
- self._output_dir = output_dir
- self._max_dets_per_image = max_dets_per_image
-
- self._cpu_device = torch.device("cpu")
-
- self._metadata = MetadataCatalog.get(dataset_name)
- json_file = PathManager.get_local_path(self._metadata.json_file)
- self._lvis_api = LVIS(json_file)
- # Test set json files do not contain annotations (evaluation must be
- # performed using the LVIS evaluation server).
- self._do_evaluation = len(self._lvis_api.get_ann_ids()) > 0
-
- def reset(self):
- self._predictions = []
-
- def process(self, inputs, outputs):
- """
- Args:
- inputs: the inputs to a LVIS model (e.g., GeneralizedRCNN).
- It is a list of dict. Each dict corresponds to an image and
- contains keys like "height", "width", "file_name", "image_id".
- outputs: the outputs of a LVIS model. It is a list of dicts with key
- "instances" that contains :class:`Instances`.
- """
- for input, output in zip(inputs, outputs):
- prediction = {"image_id": input["image_id"]}
-
- if "instances" in output:
- instances = output["instances"].to(self._cpu_device)
- prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
- if "proposals" in output:
- prediction["proposals"] = output["proposals"].to(self._cpu_device)
- self._predictions.append(prediction)
-
- def evaluate(self):
- if self._distributed:
- comm.synchronize()
- predictions = comm.gather(self._predictions, dst=0)
- predictions = list(itertools.chain(*predictions))
-
- if not comm.is_main_process():
- return
- else:
- predictions = self._predictions
-
- if len(predictions) == 0:
- self._logger.warning("[LVISEvaluator] Did not receive valid predictions.")
- return {}
-
- if self._output_dir:
- PathManager.mkdirs(self._output_dir)
- file_path = os.path.join(self._output_dir, "instances_predictions.pth")
- with PathManager.open(file_path, "wb") as f:
- torch.save(predictions, f)
-
- self._results = OrderedDict()
- if "proposals" in predictions[0]:
- self._eval_box_proposals(predictions)
- if "instances" in predictions[0]:
- self._eval_predictions(predictions)
- # Copy so the caller can do whatever with results
- return copy.deepcopy(self._results)
-
- def _tasks_from_predictions(self, predictions):
- for pred in predictions:
- if "segmentation" in pred:
- return ("bbox", "segm")
- return ("bbox",)
-
- def _eval_predictions(self, predictions):
- """
- Evaluate predictions. Fill self._results with the metrics of the tasks.
-
- Args:
- predictions (list[dict]): list of outputs from the model
- """
- self._logger.info("Preparing results in the LVIS format ...")
- lvis_results = list(itertools.chain(*[x["instances"] for x in predictions]))
- tasks = self._tasks or self._tasks_from_predictions(lvis_results)
-
- # LVIS evaluator can be used to evaluate results for COCO dataset categories.
- # In this case `_metadata` variable will have a field with COCO-specific category mapping.
- if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"):
- reverse_id_mapping = {
- v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items()
- }
- for result in lvis_results:
- result["category_id"] = reverse_id_mapping[result["category_id"]]
- else:
- # unmap the category ids for LVIS (from 0-indexed to 1-indexed)
- for result in lvis_results:
- result["category_id"] += 1
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "lvis_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(lvis_results))
- f.flush()
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info("Evaluating predictions ...")
- for task in sorted(tasks):
- res = _evaluate_predictions_on_lvis(
- self._lvis_api,
- lvis_results,
- task,
- max_dets_per_image=self._max_dets_per_image,
- class_names=self._metadata.get("thing_classes"),
- )
- self._results[task] = res
-
- def _eval_box_proposals(self, predictions):
- """
- Evaluate the box proposals in predictions.
- Fill self._results with the metrics for "box_proposals" task.
- """
- if self._output_dir:
- # Saving generated box proposals to file.
- # Predicted box_proposals are in XYXY_ABS mode.
- bbox_mode = BoxMode.XYXY_ABS.value
- ids, boxes, objectness_logits = [], [], []
- for prediction in predictions:
- ids.append(prediction["image_id"])
- boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy())
- objectness_logits.append(prediction["proposals"].objectness_logits.numpy())
-
- proposal_data = {
- "boxes": boxes,
- "objectness_logits": objectness_logits,
- "ids": ids,
- "bbox_mode": bbox_mode,
- }
- with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f:
- pickle.dump(proposal_data, f)
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info("Evaluating bbox proposals ...")
- res = {}
- areas = {"all": "", "small": "s", "medium": "m", "large": "l"}
- for limit in [100, 1000]:
- for area, suffix in areas.items():
- stats = _evaluate_box_proposals(predictions, self._lvis_api, area=area, limit=limit)
- key = "AR{}@{:d}".format(suffix, limit)
- res[key] = float(stats["ar"].item() * 100)
- self._logger.info("Proposal metrics: \n" + create_small_table(res))
- self._results["box_proposals"] = res
-
-
-# inspired from Detectron:
-# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa
-def _evaluate_box_proposals(dataset_predictions, lvis_api, thresholds=None, area="all", limit=None):
- """
- Evaluate detection proposal recall metrics. This function is a much
- faster alternative to the official LVIS API recall evaluation code. However,
- it produces slightly different results.
- """
- # Record max overlap value for each gt box
- # Return vector of overlap values
- areas = {
- "all": 0,
- "small": 1,
- "medium": 2,
- "large": 3,
- "96-128": 4,
- "128-256": 5,
- "256-512": 6,
- "512-inf": 7,
- }
- area_ranges = [
- [0 ** 2, 1e5 ** 2], # all
- [0 ** 2, 32 ** 2], # small
- [32 ** 2, 96 ** 2], # medium
- [96 ** 2, 1e5 ** 2], # large
- [96 ** 2, 128 ** 2], # 96-128
- [128 ** 2, 256 ** 2], # 128-256
- [256 ** 2, 512 ** 2], # 256-512
- [512 ** 2, 1e5 ** 2],
- ] # 512-inf
- assert area in areas, "Unknown area range: {}".format(area)
- area_range = area_ranges[areas[area]]
- gt_overlaps = []
- num_pos = 0
-
- for prediction_dict in dataset_predictions:
- predictions = prediction_dict["proposals"]
-
- # sort predictions in descending order
- # TODO maybe remove this and make it explicit in the documentation
- inds = predictions.objectness_logits.sort(descending=True)[1]
- predictions = predictions[inds]
-
- ann_ids = lvis_api.get_ann_ids(img_ids=[prediction_dict["image_id"]])
- anno = lvis_api.load_anns(ann_ids)
- gt_boxes = [
- BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS) for obj in anno
- ]
- gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes
- gt_boxes = Boxes(gt_boxes)
- gt_areas = torch.as_tensor([obj["area"] for obj in anno])
-
- if len(gt_boxes) == 0 or len(predictions) == 0:
- continue
-
- valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1])
- gt_boxes = gt_boxes[valid_gt_inds]
-
- num_pos += len(gt_boxes)
-
- if len(gt_boxes) == 0:
- continue
-
- if limit is not None and len(predictions) > limit:
- predictions = predictions[:limit]
-
- overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes)
-
- _gt_overlaps = torch.zeros(len(gt_boxes))
- for j in range(min(len(predictions), len(gt_boxes))):
- # find which proposal box maximally covers each gt box
- # and get the iou amount of coverage for each gt box
- max_overlaps, argmax_overlaps = overlaps.max(dim=0)
-
- # find which gt box is 'best' covered (i.e. 'best' = most iou)
- gt_ovr, gt_ind = max_overlaps.max(dim=0)
- assert gt_ovr >= 0
- # find the proposal box that covers the best covered gt box
- box_ind = argmax_overlaps[gt_ind]
- # record the iou coverage of this gt box
- _gt_overlaps[j] = overlaps[box_ind, gt_ind]
- assert _gt_overlaps[j] == gt_ovr
- # mark the proposal box and the gt box as used
- overlaps[box_ind, :] = -1
- overlaps[:, gt_ind] = -1
-
- # append recorded iou coverage level
- gt_overlaps.append(_gt_overlaps)
- gt_overlaps = (
- torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32)
- )
- gt_overlaps, _ = torch.sort(gt_overlaps)
-
- if thresholds is None:
- step = 0.05
- thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32)
- recalls = torch.zeros_like(thresholds)
- # compute recall for each iou threshold
- for i, t in enumerate(thresholds):
- recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos)
- # ar = 2 * np.trapz(recalls, thresholds)
- ar = recalls.mean()
- return {
- "ar": ar,
- "recalls": recalls,
- "thresholds": thresholds,
- "gt_overlaps": gt_overlaps,
- "num_pos": num_pos,
- }
-
-
-def _evaluate_predictions_on_lvis(
- lvis_gt, lvis_results, iou_type, max_dets_per_image=None, class_names=None
-):
- """
- Args:
- iou_type (str):
-        max_dets_per_image (None or int): limit on the maximum number of detections per image
-            when evaluating AP. The default for the LVIS dataset is 300.
- class_names (None or list[str]): if provided, will use it to predict
- per-category AP.
-
- Returns:
- a dict of {metric name: score}
- """
- metrics = {
- "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"],
- "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"],
- }[iou_type]
-
- logger = logging.getLogger(__name__)
-
- if len(lvis_results) == 0: # TODO: check if needed
-        logger.warning("No predictions from the model!")
- return {metric: float("nan") for metric in metrics}
-
- if iou_type == "segm":
- lvis_results = copy.deepcopy(lvis_results)
- # When evaluating mask AP, if the results contain bbox, LVIS API will
- # use the box area as the area of the instance, instead of the mask area.
- # This leads to a different definition of small/medium/large.
- # We remove the bbox field to let mask AP use mask area.
- for c in lvis_results:
- c.pop("bbox", None)
-
- if max_dets_per_image is None:
- max_dets_per_image = 300 # Default for LVIS dataset
-
- from lvis import LVISEval, LVISResults
-
- logger.info(f"Evaluating with max detections per image = {max_dets_per_image}")
- lvis_results = LVISResults(lvis_gt, lvis_results, max_dets=max_dets_per_image)
- lvis_eval = LVISEval(lvis_gt, lvis_results, iou_type)
- lvis_eval.run()
- lvis_eval.print_results()
-
- # Pull the standard metrics from the LVIS results
- results = lvis_eval.get_results()
- results = {metric: float(results[metric] * 100) for metric in metrics}
- logger.info("Evaluation results for {}: \n".format(iou_type) + create_small_table(results))
- return results
diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/transition.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/transition.js
deleted file mode 100644
index 9df9e5d3b37870dbc4d6b4368aed99311ceaef0c..0000000000000000000000000000000000000000
--- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/transition.js
+++ /dev/null
@@ -1,329 +0,0 @@
-let { list } = require('postcss')
-let parser = require('postcss-value-parser')
-
-let Browsers = require('./browsers')
-let vendor = require('./vendor')
-
-class Transition {
- constructor(prefixes) {
- this.props = ['transition', 'transition-property']
- this.prefixes = prefixes
- }
-
- /**
- * Process transition and add prefixes for all necessary properties
- */
- add(decl, result) {
- let prefix, prop
- let add = this.prefixes.add[decl.prop]
- let vendorPrefixes = this.ruleVendorPrefixes(decl)
- let declPrefixes = vendorPrefixes || (add && add.prefixes) || []
-
- let params = this.parse(decl.value)
- let names = params.map(i => this.findProp(i))
- let added = []
-
- if (names.some(i => i[0] === '-')) {
- return
- }
-
- for (let param of params) {
- prop = this.findProp(param)
- if (prop[0] === '-') continue
-
- let prefixer = this.prefixes.add[prop]
- if (!prefixer || !prefixer.prefixes) continue
-
- for (prefix of prefixer.prefixes) {
- if (vendorPrefixes && !vendorPrefixes.some(p => prefix.includes(p))) {
- continue
- }
-
- let prefixed = this.prefixes.prefixed(prop, prefix)
- if (prefixed !== '-ms-transform' && !names.includes(prefixed)) {
- if (!this.disabled(prop, prefix)) {
- added.push(this.clone(prop, prefixed, param))
- }
- }
- }
- }
-
- params = params.concat(added)
- let value = this.stringify(params)
-
- let webkitClean = this.stringify(
- this.cleanFromUnprefixed(params, '-webkit-')
- )
- if (declPrefixes.includes('-webkit-')) {
- this.cloneBefore(decl, `-webkit-${decl.prop}`, webkitClean)
- }
- this.cloneBefore(decl, decl.prop, webkitClean)
- if (declPrefixes.includes('-o-')) {
- let operaClean = this.stringify(this.cleanFromUnprefixed(params, '-o-'))
- this.cloneBefore(decl, `-o-${decl.prop}`, operaClean)
- }
-
- for (prefix of declPrefixes) {
- if (prefix !== '-webkit-' && prefix !== '-o-') {
- let prefixValue = this.stringify(
- this.cleanOtherPrefixes(params, prefix)
- )
- this.cloneBefore(decl, prefix + decl.prop, prefixValue)
- }
- }
-
- if (value !== decl.value && !this.already(decl, decl.prop, value)) {
- this.checkForWarning(result, decl)
- decl.cloneBefore()
- decl.value = value
- }
- }
-
- /**
- * Find property name
- */
- findProp(param) {
- let prop = param[0].value
- if (/^\d/.test(prop)) {
- for (let [i, token] of param.entries()) {
- if (i !== 0 && token.type === 'word') {
- return token.value
- }
- }
- }
- return prop
- }
-
- /**
-   * Do we already have this declaration
- */
- already(decl, prop, value) {
- return decl.parent.some(i => i.prop === prop && i.value === value)
- }
-
- /**
-   * Add declaration if it does not exist
- */
- cloneBefore(decl, prop, value) {
- if (!this.already(decl, prop, value)) {
- decl.cloneBefore({ prop, value })
- }
- }
-
- /**
- * Show transition-property warning
- */
- checkForWarning(result, decl) {
- if (decl.prop !== 'transition-property') {
- return
- }
-
- let isPrefixed = false
- let hasAssociatedProp = false
-
- decl.parent.each(i => {
- if (i.type !== 'decl') {
- return undefined
- }
- if (i.prop.indexOf('transition-') !== 0) {
- return undefined
- }
- let values = list.comma(i.value)
- // check if current Rule's transition-property comma separated value list needs prefixes
- if (i.prop === 'transition-property') {
- values.forEach(value => {
- let lookup = this.prefixes.add[value]
- if (lookup && lookup.prefixes && lookup.prefixes.length > 0) {
- isPrefixed = true
- }
- })
- return undefined
- }
- // check if another transition-* prop in current Rule has comma separated value list
- hasAssociatedProp = hasAssociatedProp || values.length > 1
- return false
- })
-
- if (isPrefixed && hasAssociatedProp) {
- decl.warn(
- result,
- 'Replace transition-property to transition, ' +
- 'because Autoprefixer could not support ' +
- 'any cases of transition-property ' +
- 'and other transition-*'
- )
- }
- }
-
- /**
- * Process transition and remove all unnecessary properties
- */
- remove(decl) {
- let params = this.parse(decl.value)
- params = params.filter(i => {
- let prop = this.prefixes.remove[this.findProp(i)]
- return !prop || !prop.remove
- })
- let value = this.stringify(params)
-
- if (decl.value === value) {
- return
- }
-
- if (params.length === 0) {
- decl.remove()
- return
- }
-
- let double = decl.parent.some(i => {
- return i.prop === decl.prop && i.value === value
- })
- let smaller = decl.parent.some(i => {
- return i !== decl && i.prop === decl.prop && i.value.length > value.length
- })
-
- if (double || smaller) {
- decl.remove()
- return
- }
-
- decl.value = value
- }
-
- /**
- * Parse properties list to array
- */
- parse(value) {
- let ast = parser(value)
- let result = []
- let param = []
- for (let node of ast.nodes) {
- param.push(node)
- if (node.type === 'div' && node.value === ',') {
- result.push(param)
- param = []
- }
- }
- result.push(param)
- return result.filter(i => i.length > 0)
- }
-
- /**
- * Return properties string from array
- */
- stringify(params) {
- if (params.length === 0) {
- return ''
- }
- let nodes = []
- for (let param of params) {
- if (param[param.length - 1].type !== 'div') {
- param.push(this.div(params))
- }
- nodes = nodes.concat(param)
- }
- if (nodes[0].type === 'div') {
- nodes = nodes.slice(1)
- }
- if (nodes[nodes.length - 1].type === 'div') {
-      nodes = nodes.slice(0, -1)
- }
- return parser.stringify({ nodes })
- }
-
- /**
- * Return new param array with different name
- */
- clone(origin, name, param) {
- let result = []
- let changed = false
- for (let i of param) {
- if (!changed && i.type === 'word' && i.value === origin) {
- result.push({ type: 'word', value: name })
- changed = true
- } else {
- result.push(i)
- }
- }
- return result
- }
-
- /**
- * Find or create separator
- */
- div(params) {
- for (let param of params) {
- for (let node of param) {
- if (node.type === 'div' && node.value === ',') {
- return node
- }
- }
- }
- return { type: 'div', value: ',', after: ' ' }
- }
-
- cleanOtherPrefixes(params, prefix) {
- return params.filter(param => {
- let current = vendor.prefix(this.findProp(param))
- return current === '' || current === prefix
- })
- }
-
- /**
- * Remove all non-webkit prefixes and unprefixed params if we have prefixed
- */
- cleanFromUnprefixed(params, prefix) {
- let remove = params
- .map(i => this.findProp(i))
- .filter(i => i.slice(0, prefix.length) === prefix)
- .map(i => this.prefixes.unprefixed(i))
-
- let result = []
- for (let param of params) {
- let prop = this.findProp(param)
- let p = vendor.prefix(prop)
- if (!remove.includes(prop) && (p === prefix || p === '')) {
- result.push(param)
- }
- }
- return result
- }
-
- /**
- * Check property for disabled by option
- */
- disabled(prop, prefix) {
- let other = ['order', 'justify-content', 'align-self', 'align-content']
- if (prop.includes('flex') || other.includes(prop)) {
- if (this.prefixes.options.flexbox === false) {
- return true
- }
-
- if (this.prefixes.options.flexbox === 'no-2009') {
- return prefix.includes('2009')
- }
- }
- return undefined
- }
-
- /**
- * Check if transition prop is inside vendor specific rule
- */
- ruleVendorPrefixes(decl) {
- let { parent } = decl
-
- if (parent.type !== 'rule') {
- return false
- } else if (!parent.selector.includes(':-')) {
- return false
- }
-
- let selectors = Browsers.prefixes().filter(s =>
- parent.selector.includes(':' + s)
- )
-
- return selectors.length > 0 ? selectors : false
- }
-}
-
-module.exports = Transition
diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/parse.js b/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/parse.js
deleted file mode 100644
index 950631c94a3359924b35c47557f0513cd3444b16..0000000000000000000000000000000000000000
--- a/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/parse.js
+++ /dev/null
@@ -1,321 +0,0 @@
-var openParentheses = "(".charCodeAt(0);
-var closeParentheses = ")".charCodeAt(0);
-var singleQuote = "'".charCodeAt(0);
-var doubleQuote = '"'.charCodeAt(0);
-var backslash = "\\".charCodeAt(0);
-var slash = "/".charCodeAt(0);
-var comma = ",".charCodeAt(0);
-var colon = ":".charCodeAt(0);
-var star = "*".charCodeAt(0);
-var uLower = "u".charCodeAt(0);
-var uUpper = "U".charCodeAt(0);
-var plus = "+".charCodeAt(0);
-var isUnicodeRange = /^[a-f0-9?-]+$/i;
-
-module.exports = function(input) {
- var tokens = [];
- var value = input;
-
- var next,
- quote,
- prev,
- token,
- escape,
- escapePos,
- whitespacePos,
- parenthesesOpenPos;
- var pos = 0;
- var code = value.charCodeAt(pos);
- var max = value.length;
- var stack = [{ nodes: tokens }];
- var balanced = 0;
- var parent;
-
- var name = "";
- var before = "";
- var after = "";
-
- while (pos < max) {
- // Whitespaces
- if (code <= 32) {
- next = pos;
- do {
- next += 1;
- code = value.charCodeAt(next);
- } while (code <= 32);
- token = value.slice(pos, next);
-
- prev = tokens[tokens.length - 1];
- if (code === closeParentheses && balanced) {
- after = token;
- } else if (prev && prev.type === "div") {
- prev.after = token;
- prev.sourceEndIndex += token.length;
- } else if (
- code === comma ||
- code === colon ||
- (code === slash &&
- value.charCodeAt(next + 1) !== star &&
- (!parent ||
- (parent && parent.type === "function" && parent.value !== "calc")))
- ) {
- before = token;
- } else {
- tokens.push({
- type: "space",
- sourceIndex: pos,
- sourceEndIndex: next,
- value: token
- });
- }
-
- pos = next;
-
- // Quotes
- } else if (code === singleQuote || code === doubleQuote) {
- next = pos;
- quote = code === singleQuote ? "'" : '"';
- token = {
- type: "string",
- sourceIndex: pos,
- quote: quote
- };
- do {
- escape = false;
- next = value.indexOf(quote, next + 1);
- if (~next) {
- escapePos = next;
- while (value.charCodeAt(escapePos - 1) === backslash) {
- escapePos -= 1;
- escape = !escape;
- }
- } else {
- value += quote;
- next = value.length - 1;
- token.unclosed = true;
- }
- } while (escape);
- token.value = value.slice(pos + 1, next);
- token.sourceEndIndex = token.unclosed ? next : next + 1;
- tokens.push(token);
- pos = next + 1;
- code = value.charCodeAt(pos);
-
- // Comments
- } else if (code === slash && value.charCodeAt(pos + 1) === star) {
- next = value.indexOf("*/", pos);
-
- token = {
- type: "comment",
- sourceIndex: pos,
- sourceEndIndex: next + 2
- };
-
- if (next === -1) {
- token.unclosed = true;
- next = value.length;
- token.sourceEndIndex = next;
- }
-
- token.value = value.slice(pos + 2, next);
- tokens.push(token);
-
- pos = next + 2;
- code = value.charCodeAt(pos);
-
- // Operation within calc
- } else if (
- (code === slash || code === star) &&
- parent &&
- parent.type === "function" &&
- parent.value === "calc"
- ) {
- token = value[pos];
- tokens.push({
- type: "word",
- sourceIndex: pos - before.length,
- sourceEndIndex: pos + token.length,
- value: token
- });
- pos += 1;
- code = value.charCodeAt(pos);
-
- // Dividers
- } else if (code === slash || code === comma || code === colon) {
- token = value[pos];
-
- tokens.push({
- type: "div",
- sourceIndex: pos - before.length,
- sourceEndIndex: pos + token.length,
- value: token,
- before: before,
- after: ""
- });
- before = "";
-
- pos += 1;
- code = value.charCodeAt(pos);
-
- // Open parentheses
- } else if (openParentheses === code) {
- // Whitespaces after open parentheses
- next = pos;
- do {
- next += 1;
- code = value.charCodeAt(next);
- } while (code <= 32);
- parenthesesOpenPos = pos;
- token = {
- type: "function",
- sourceIndex: pos - name.length,
- value: name,
- before: value.slice(parenthesesOpenPos + 1, next)
- };
- pos = next;
-
- if (name === "url" && code !== singleQuote && code !== doubleQuote) {
- next -= 1;
- do {
- escape = false;
- next = value.indexOf(")", next + 1);
- if (~next) {
- escapePos = next;
- while (value.charCodeAt(escapePos - 1) === backslash) {
- escapePos -= 1;
- escape = !escape;
- }
- } else {
- value += ")";
- next = value.length - 1;
- token.unclosed = true;
- }
- } while (escape);
- // Whitespaces before closed
- whitespacePos = next;
- do {
- whitespacePos -= 1;
- code = value.charCodeAt(whitespacePos);
- } while (code <= 32);
- if (parenthesesOpenPos < whitespacePos) {
- if (pos !== whitespacePos + 1) {
- token.nodes = [
- {
- type: "word",
- sourceIndex: pos,
- sourceEndIndex: whitespacePos + 1,
- value: value.slice(pos, whitespacePos + 1)
- }
- ];
- } else {
- token.nodes = [];
- }
- if (token.unclosed && whitespacePos + 1 !== next) {
- token.after = "";
- token.nodes.push({
- type: "space",
- sourceIndex: whitespacePos + 1,
- sourceEndIndex: next,
- value: value.slice(whitespacePos + 1, next)
- });
- } else {
- token.after = value.slice(whitespacePos + 1, next);
- token.sourceEndIndex = next;
- }
- } else {
- token.after = "";
- token.nodes = [];
- }
- pos = next + 1;
- token.sourceEndIndex = token.unclosed ? next : pos;
- code = value.charCodeAt(pos);
- tokens.push(token);
- } else {
- balanced += 1;
- token.after = "";
- token.sourceEndIndex = pos + 1;
- tokens.push(token);
- stack.push(token);
- tokens = token.nodes = [];
- parent = token;
- }
- name = "";
-
- // Close parentheses
- } else if (closeParentheses === code && balanced) {
- pos += 1;
- code = value.charCodeAt(pos);
-
- parent.after = after;
- parent.sourceEndIndex += after.length;
- after = "";
- balanced -= 1;
- stack[stack.length - 1].sourceEndIndex = pos;
- stack.pop();
- parent = stack[balanced];
- tokens = parent.nodes;
-
- // Words
- } else {
- next = pos;
- do {
- if (code === backslash) {
- next += 1;
- }
- next += 1;
- code = value.charCodeAt(next);
- } while (
- next < max &&
- !(
- code <= 32 ||
- code === singleQuote ||
- code === doubleQuote ||
- code === comma ||
- code === colon ||
- code === slash ||
- code === openParentheses ||
- (code === star &&
- parent &&
- parent.type === "function" &&
- parent.value === "calc") ||
- (code === slash &&
- parent.type === "function" &&
- parent.value === "calc") ||
- (code === closeParentheses && balanced)
- )
- );
- token = value.slice(pos, next);
-
- if (openParentheses === code) {
- name = token;
- } else if (
- (uLower === token.charCodeAt(0) || uUpper === token.charCodeAt(0)) &&
- plus === token.charCodeAt(1) &&
- isUnicodeRange.test(token.slice(2))
- ) {
- tokens.push({
- type: "unicode-range",
- sourceIndex: pos,
- sourceEndIndex: next,
- value: token
- });
- } else {
- tokens.push({
- type: "word",
- sourceIndex: pos,
- sourceEndIndex: next,
- value: token
- });
- }
-
- pos = next;
- }
- }
-
- for (pos = stack.length - 1; pos; pos -= 1) {
- stack[pos].unclosed = true;
- stack[pos].sourceEndIndex = value.length;
- }
-
- return stack[0].nodes;
-};
diff --git a/spaces/ysharma/OSChatbots_ChatGPT_ToeToToe/app.py b/spaces/ysharma/OSChatbots_ChatGPT_ToeToToe/app.py
deleted file mode 100644
index 5979ef418dbd9526245d2f31c78cf8db01d8da56..0000000000000000000000000000000000000000
--- a/spaces/ysharma/OSChatbots_ChatGPT_ToeToToe/app.py
+++ /dev/null
@@ -1,299 +0,0 @@
-import gradio as gr
-import json
-import requests
-import os
-from text_generation import Client, InferenceAPIClient
-
-# Load pre-trained model and tokenizer - for THUDM model
-from transformers import AutoModel, AutoTokenizer
-tokenizer_glm = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
-model_glm = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
-model_glm = model_glm.eval()
-
-# Load pre-trained model and tokenizer for Chinese to English translator
-from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
-model_chtoen = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
-tokenizer_chtoen = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
-
-#Streaming endpoint for OPENAI ChatGPT
-API_URL = "https://api.openai.com/v1/chat/completions"
-#Streaming endpoint for OPENCHATKIT
-API_URL_TGTHR = os.getenv('API_URL_TGTHR')
-
-openchat_preprompt = (
- "\n: Hi!\n: My name is Bot, model version is 0.15, part of an open-source kit for "
- "fine-tuning new bots! I was created by Together, LAION, and Ontocord.ai and the open-source "
- "community. I am not human, not evil and not alive, and thus have no thoughts and feelings, "
- "but I am programmed to be helpful, polite, honest, and friendly.\n")
-
-#Predict function for CHATGPT
-def predict_chatgpt(inputs, top_p_chatgpt, temperature_chatgpt, openai_api_key, chat_counter_chatgpt, chatbot_chatgpt=[], history=[]):
- #Define payload and header for chatgpt API
- payload = {
- "model": "gpt-3.5-turbo",
- "messages": [{"role": "user", "content": f"{inputs}"}],
- "temperature" : 1.0,
- "top_p":1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}"
- }
- #debug
- #print(f"chat_counter_chatgpt - {chat_counter_chatgpt}")
-
- #Handling the different roles for ChatGPT
- if chat_counter_chatgpt != 0 :
- messages=[]
- for data in chatbot_chatgpt:
- temp1 = {}
- temp1["role"] = "user"
- temp1["content"] = data[0]
- temp2 = {}
- temp2["role"] = "assistant"
- temp2["content"] = data[1]
- messages.append(temp1)
- messages.append(temp2)
- temp3 = {}
- temp3["role"] = "user"
- temp3["content"] = inputs
- messages.append(temp3)
- payload = {
- "model": "gpt-3.5-turbo",
- "messages": messages, #[{"role": "user", "content": f"{inputs}"}],
- "temperature" : temperature_chatgpt, #1.0,
- "top_p": top_p_chatgpt, #1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
-
- chat_counter_chatgpt+=1
-
- history.append(inputs)
-
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- token_counter = 0
- partial_words = ""
-
- counter=0
- for chunk in response.iter_lines():
- #Skipping the first chunk
- if counter == 0:
- counter+=1
- continue
- # check whether each line is non-empty
- if chunk.decode() :
- chunk = chunk.decode()
- # decode each line as response data is in bytes
- if len(chunk) > 13 and "content" in json.loads(chunk[6:])['choices'][0]["delta"]:
- partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
- chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list
- token_counter+=1
- yield chat, history, chat_counter_chatgpt # this resembles {chatbot: chat, state: history}
-
-
-#Predict function for OPENCHATKIT
-def predict_together(model: str,
- inputs: str,
- top_p: float,
- temperature: float,
- top_k: int,
- repetition_penalty: float,
- watermark: bool,
- chatbot,
- history,):
-
- client = Client(os.getenv("API_URL_TGTHR")) #get_client(model)
- # debug
- #print(f"^^client is - {client}")
- user_name, assistant_name = ": ", ": "
- preprompt = openchat_preprompt
- sep = '\n'
-
- history.append(inputs)
-
- past = []
- for data in chatbot:
- user_data, model_data = data
-
- if not user_data.startswith(user_name):
- user_data = user_name + user_data
- if not model_data.startswith("\n" + assistant_name):
- model_data = "\n" + assistant_name + model_data
-
- past.append(user_data + model_data.rstrip() + "\n")
-
- if not inputs.startswith(user_name):
- inputs = user_name + inputs
-
- total_inputs = preprompt + "".join(past) + inputs + "\n" + assistant_name.rstrip()
- # truncate total_inputs
- #total_inputs = total_inputs[-1000:]
-
- partial_words = ""
-
- for i, response in enumerate(client.generate_stream(
- total_inputs,
- top_p=top_p,
- top_k=top_k,
- repetition_penalty=repetition_penalty,
- watermark=watermark,
- temperature=temperature,
- max_new_tokens=500,
- stop_sequences=[user_name.rstrip(), assistant_name.rstrip()],
- )):
- if response.token.special:
- continue
-
- partial_words = partial_words + response.token.text
- if partial_words.endswith(user_name.rstrip()):
- partial_words = partial_words.rstrip(user_name.rstrip())
- if partial_words.endswith(assistant_name.rstrip()):
- partial_words = partial_words.rstrip(assistant_name.rstrip())
-
- if i == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
-
- chat = [
- (history[i].strip(), history[i + 1].strip()) for i in range(0, len(history) - 1, 2)
- ]
- yield chat, history
-
-# Define function to generate model predictions and update the history
-def predict_glm(input, history=[]):
- response, history = model_glm.chat(tokenizer_glm, input, history)
- # translate Chinese to English
- history = [(query, translate_Chinese_English(response)) for query, response in history]
- return history, history #[history] + updates
-
-def translate_Chinese_English(chinese_text):
- # translate Chinese to English
- tokenizer_chtoen.src_lang = "zh"
- encoded_zh = tokenizer_chtoen(chinese_text, return_tensors="pt")
- generated_tokens = model_chtoen.generate(**encoded_zh, forced_bos_token_id=tokenizer_chtoen.get_lang_id("en"))
- trans_eng_text = tokenizer_chtoen.batch_decode(generated_tokens, skip_special_tokens=True)
- return trans_eng_text[0]
-
-"""
-with gr.Blocks() as demo:
- chatbot = gr.Chatbot()
- state = gr.State([])
-
- with gr.Row():
- txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style(container=False)
-
- txt.submit(predict, [txt, state], [chatbot, state])
-
-demo.launch(debug=True)
-"""
-
-def reset_textbox():
- return gr.update(value="")
-
-def reset_chat(chatbot, state):
- # debug
- #print(f"^^chatbot value is - {chatbot}")
- #print(f"^^state value is - {state}")
- return None, []
-
-
-#title = """🔥🔥Comparison: ChatGPT & OpenChatKit 🚀A Gradio Streaming Demo Official Demo: OpenChatKit feedback app """
-title = """🔥🔥Comparison: ChatGPT & Open Sourced ChatGLM-6B 🚀A Gradio Chatbot Demo """
-description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
-```
-User:
-Assistant:
-User:
-Assistant:
-...
-```
-In this app, you can explore the outputs of multiple LLMs when prompted in similar ways.
-"""
-
-with gr.Blocks(css="""#col_container {width: 1000px; margin-left: auto; margin-right: auto;}
- #chatgpt {height: 520px; overflow: auto;}
- #chatglm {height: 520px; overflow: auto;} """ ) as demo:
- #chattogether {height: 520px; overflow: auto;} """ ) as demo:
- #clear {width: 100px; height:50px; font-size:12px}""") as demo:
- gr.HTML(title)
- with gr.Row():
- with gr.Column(scale=14):
- with gr.Box():
- with gr.Row():
- with gr.Column(scale=13):
- openai_api_key = gr.Textbox(type='password', label="Enter your OpenAI API key here for ChatGPT")
- inputs = gr.Textbox(placeholder="Hi there!", label="Type an input and press Enter ⤵️ " )
- with gr.Column(scale=1):
- b1 = gr.Button('🏃Run', elem_id = 'run').style(full_width=True)
- b2 = gr.Button('🔄Clear up Chatbots!', elem_id = 'clear').style(full_width=True)
- state_chatgpt = gr.State([])
- #state_together = gr.State([])
- state_glm = gr.State([])
-
- with gr.Box():
- with gr.Row():
- chatbot_chatgpt = gr.Chatbot(elem_id="chatgpt", label='ChatGPT API - OPENAI')
- #chatbot_together = gr.Chatbot(elem_id="chattogether", label='OpenChatKit - Text Generation')
- chatbot_glm = gr.Chatbot(elem_id="chatglm", label='THUDM-ChatGLM6B')
-
- with gr.Column(scale=2, elem_id='parameters'):
- with gr.Box():
- gr.HTML("Parameters for #OpenCHAtKit", visible=False)
- top_p = gr.Slider(minimum=-0, maximum=1.0,value=0.25, step=0.05,interactive=True, label="Top-p", visible=False)
- temperature = gr.Slider(minimum=-0, maximum=5.0, value=0.6, step=0.1, interactive=True, label="Temperature", visible=False)
- top_k = gr.Slider( minimum=1, maximum=50, value=50, step=1, interactive=True, label="Top-k", visible=False)
- repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.01, step=0.01, interactive=True, label="Repetition Penalty", visible=False)
- watermark = gr.Checkbox(value=True, label="Text watermarking", visible=False)
- model = gr.CheckboxGroup(value="Rallio67/joi2_20B_instruct_alpha",
- choices=["togethercomputer/GPT-NeoXT-Chat-Base-20B", "Rallio67/joi2_20B_instruct_alpha", "google/flan-t5-xxl", "google/flan-ul2", "bigscience/bloomz", "EleutherAI/gpt-neox-20b",],
- label="Model",visible=False,)
- temp_textbox_together = gr.Textbox(value=model.choices[0], visible=False)
-
- with gr.Box():
- gr.HTML("Parameters for OpenAI's ChatGPT")
- top_p_chatgpt = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p",)
- temperature_chatgpt = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",)
- chat_counter_chatgpt = gr.Number(value=0, visible=False, precision=0)
-
- inputs.submit(reset_textbox, [], [inputs])
-
- inputs.submit( predict_chatgpt,
- [inputs, top_p_chatgpt, temperature_chatgpt, openai_api_key, chat_counter_chatgpt, chatbot_chatgpt, state_chatgpt],
- [chatbot_chatgpt, state_chatgpt, chat_counter_chatgpt],)
- #inputs.submit( predict_together,
- # [temp_textbox_together, inputs, top_p, temperature, top_k, repetition_penalty, watermark, chatbot_together, state_together, ],
- # [chatbot_together, state_together],)
- inputs.submit( predict_glm,
- [inputs, state_glm, ],
- [chatbot_glm, state_glm],)
- b1.click( predict_chatgpt,
- [inputs, top_p_chatgpt, temperature_chatgpt, openai_api_key, chat_counter_chatgpt, chatbot_chatgpt, state_chatgpt],
- [chatbot_chatgpt, state_chatgpt, chat_counter_chatgpt],)
- #b1.click( predict_together,
- # [temp_textbox_together, inputs, top_p, temperature, top_k, repetition_penalty, watermark, chatbot_together, state_together, ],
- # [chatbot_together, state_together],)
- b1.click( predict_glm,
- [inputs, state_glm, ],
- [chatbot_glm, state_glm],)
-
- b2.click(reset_chat, [chatbot_chatgpt, state_chatgpt], [chatbot_chatgpt, state_chatgpt])
- #b2.click(reset_chat, [chatbot_together, state_together], [chatbot_together, state_together])
- b2.click(reset_chat, [chatbot_glm, state_glm], [chatbot_glm, state_glm])
-
- gr.HTML(''' Duplicate the Space and run securely with your OpenAI API Key ''')
- gr.Markdown(description)
- demo.queue(concurrency_count=16).launch(height= 2500, debug=True)
\ No newline at end of file
diff --git a/spaces/zhang-wei-jian/docker/node_modules/debug/README.md b/spaces/zhang-wei-jian/docker/node_modules/debug/README.md
deleted file mode 100644
index e9c3e047c2b22aacd54f096af48f918217e06d84..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/debug/README.md
+++ /dev/null
@@ -1,481 +0,0 @@
-# debug
-[](https://travis-ci.org/debug-js/debug) [](https://coveralls.io/github/debug-js/debug?branch=master) [](https://visionmedia-community-slackin.now.sh/) [](#backers)
-[](#sponsors)
-
-
-
-A tiny JavaScript debugging utility modelled after Node.js core's debugging
-technique. Works in Node.js and web browsers.
-
-## Installation
-
-```bash
-$ npm install debug
-```
-
-## Usage
-
-`debug` exposes a function; simply pass this function the name of your module, and it will return a decorated version of `console.error` for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole.
-
-Example [_app.js_](./examples/node/app.js):
-
-```js
-var debug = require('debug')('http')
- , http = require('http')
- , name = 'My App';
-
-// fake app
-
-debug('booting %o', name);
-
-http.createServer(function(req, res){
- debug(req.method + ' ' + req.url);
- res.end('hello\n');
-}).listen(3000, function(){
- debug('listening');
-});
-
-// fake worker of some kind
-
-require('./worker');
-```
-
-Example [_worker.js_](./examples/node/worker.js):
-
-```js
-var a = require('debug')('worker:a')
- , b = require('debug')('worker:b');
-
-function work() {
- a('doing lots of uninteresting work');
- setTimeout(work, Math.random() * 1000);
-}
-
-work();
-
-function workb() {
- b('doing some work');
- setTimeout(workb, Math.random() * 2000);
-}
-
-workb();
-```
-
-The `DEBUG` environment variable is then used to enable these based on space or
-comma-delimited names.
-
-Here are some examples:
-
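-For example, using the `http` and `worker:*` namespaces from the examples above:
-
-```bash
-# enable only the http namespace
-DEBUG=http node app.js
-
-# enable both workers, or simply everything
-DEBUG=worker:a,worker:b node app.js
-DEBUG=* node app.js
-```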
-
-
-
-
-#### Windows command prompt notes
-
-##### CMD
-
-On Windows the environment variable is set using the `set` command.
-
-```cmd
-set DEBUG=*,-not_this
-```
-
-Example:
-
-```cmd
-set DEBUG=* & node app.js
-```
-
-##### PowerShell (VS Code default)
-
-PowerShell uses different syntax to set environment variables.
-
-```cmd
-$env:DEBUG = "*,-not_this"
-```
-
-Example:
-
-```cmd
-$env:DEBUG='app';node app.js
-```
-
-Then, run the program to be debugged as usual.
-
-npm script example:
-```js
- "windowsDebug": "@powershell -Command $env:DEBUG='*';node app.js",
-```
-
-## Namespace Colors
-
-Every debug instance has a color generated for it based on its namespace name.
-This helps when visually parsing the debug output to identify which debug instance
-a debug line belongs to.
-
-#### Node.js
-
-In Node.js, colors are enabled when stderr is a TTY. You also _should_ install
-the [`supports-color`](https://npmjs.org/supports-color) module alongside debug,
-otherwise debug will only use a small handful of basic colors.
-
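-For example:
-
-```bash
-$ npm install debug supports-color
-```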
-
-
-#### Web Browser
-
-Colors are also enabled on "Web Inspectors" that understand the `%c` formatting
-option. These are WebKit web inspectors, Firefox ([since version
-31](https://hacks.mozilla.org/2014/05/editable-box-model-multiple-selection-sublime-text-keys-much-more-firefox-developer-tools-episode-31/))
-and the Firebug plugin for Firefox (any version).
-
-
-
-
-## Millisecond diff
-
-When actively developing an application it can be useful to see the time spent between one `debug()` call and the next. Suppose, for example, you invoke `debug()` before requesting a resource and again afterwards; the "+NNNms" suffix shows you how much time was spent between calls.
-
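-On a TTY the output looks roughly like this (times are illustrative):
-
-```
-  http booting 'My App' +0ms
-  http GET / +21ms
-  http listening +4ms
-```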
-
-
-When stdout is not a TTY, `Date#toISOString()` is used, making it more useful for logging the debug information as shown below:
-
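-Roughly, with illustrative timestamps:
-
-```
-2015-03-01T00:00:00.000Z http booting 'My App'
-2015-03-01T00:00:00.000Z http GET /
-```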
-
-
-
-## Conventions
-
-If you're using this in one or more of your libraries, you _should_ use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you _should_ prefix them with your library name and use ":" to separate features. For example "bodyParser" from Connect would then be "connect:bodyParser". If you append a "*" to the end of your name, it will always be enabled regardless of the setting of the DEBUG environment variable. You can then use it for normal output as well as debug output.
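-
-A minimal sketch of this convention (library and feature names are illustrative):
-
-```js
-const createDebug = require('debug')
-
-// "connect" is the library, the part after ":" is the feature
-const logBody = createDebug('connect:bodyParser')
-const logSession = createDebug('connect:session')
-
-logBody('parsing request body') // shown when DEBUG=connect:* or DEBUG=connect:bodyParser
-```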
-
-## Wildcards
-
-The `*` character may be used as a wildcard. Suppose for example your library has
-debuggers named "connect:bodyParser", "connect:compress", "connect:session",
-instead of listing all three with
-`DEBUG=connect:bodyParser,connect:compress,connect:session`, you may simply do
-`DEBUG=connect:*`, or to run everything using this module simply use `DEBUG=*`.
-
-You can also exclude specific debuggers by prefixing them with a "-" character.
-For example, `DEBUG=*,-connect:*` would include all debuggers except those
-starting with "connect:".
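-
-For instance (namespace names are illustrative):
-
-```bash
-# everything under connect:, except the session debugger
-DEBUG=connect:*,-connect:session node app.js
-```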
-
-## Environment Variables
-
-When running through Node.js, you can set a few environment variables that will
-change the behavior of the debug logging:
-
-| Name | Purpose |
-|-----------|-------------------------------------------------|
-| `DEBUG` | Enables/disables specific debugging namespaces. |
-| `DEBUG_HIDE_DATE` | Hide date from debug output (non-TTY). |
-| `DEBUG_COLORS`| Whether or not to use colors in the debug output. |
-| `DEBUG_DEPTH` | Object inspection depth. |
-| `DEBUG_SHOW_HIDDEN` | Shows hidden properties on inspected objects. |
-
-
-__Note:__ The environment variables beginning with `DEBUG_` end up being
-converted into an Options object that gets used with `%o`/`%O` formatters.
-See the Node.js documentation for
-[`util.inspect()`](https://nodejs.org/api/util.html#util_util_inspect_object_options)
-for the complete list.
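-
-For example, an illustrative invocation that combines several of these:
-
-```bash
-DEBUG=* DEBUG_DEPTH=10 DEBUG_COLORS=no node app.js
-```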
-
-## Formatters
-
-Debug uses [printf-style](https://wikipedia.org/wiki/Printf_format_string) formatting.
-Below are the officially supported formatters:
-
-| Formatter | Representation |
-|-----------|----------------|
-| `%O` | Pretty-print an Object on multiple lines. |
-| `%o` | Pretty-print an Object all on a single line. |
-| `%s` | String. |
-| `%d` | Number (both integer and float). |
-| `%j` | JSON. Replaced with the string '[Circular]' if the argument contains circular references. |
-| `%%` | Single percent sign ('%'). This does not consume an argument. |
-
-
-### Custom formatters
-
-You can add custom formatters by extending the `debug.formatters` object.
-For example, if you wanted to add support for rendering a Buffer as hex with
-`%h`, you could do something like:
-
-```js
-const createDebug = require('debug')
-createDebug.formatters.h = (v) => {
- return v.toString('hex')
-}
-
-// …elsewhere
-const debug = createDebug('foo')
-debug('this is hex: %h', Buffer.from('hello world'))
-// foo this is hex: 68656c6c6f20776f726c64 +0ms
-```
-
-
-## Browser Support
-
-You can build a browser-ready script using [browserify](https://github.com/substack/node-browserify),
-or just use the [browserify-as-a-service](https://wzrd.in/) [build](https://wzrd.in/standalone/debug@latest),
-if you don't want to build it yourself.
-
-Debug's enable state is currently persisted by `localStorage`.
-Consider the situation shown below where you have `worker:a` and `worker:b`,
-and wish to debug both. You can enable this using `localStorage.debug`:
-
-```js
-localStorage.debug = 'worker:*'
-```
-
-And then refresh the page.
-
-```js
-a = debug('worker:a');
-b = debug('worker:b');
-
-setInterval(function(){
- a('doing some work');
-}, 1000);
-
-setInterval(function(){
- b('doing some work');
-}, 1200);
-```
-
-In Chromium-based web browsers (e.g. Brave, Chrome, and Electron), the JavaScript console will—by default—only show messages logged by `debug` if the "Verbose" log level is _enabled_.
-
-
-
-## Output streams
-
- By default `debug` will log to stderr; however, this can be configured per-namespace by overriding the `log` method:
-
-Example [_stdout.js_](./examples/node/stdout.js):
-
-```js
-var debug = require('debug');
-var error = debug('app:error');
-
-// by default stderr is used
-error('goes to stderr!');
-
-var log = debug('app:log');
-// set this namespace to log via console.log
-log.log = console.log.bind(console); // don't forget to bind to console!
-log('goes to stdout');
-error('still goes to stderr!');
-
-// set all output to go via console.info
-// overrides all per-namespace log settings
-debug.log = console.info.bind(console);
-error('now goes to stdout via console.info');
-log('still goes to stdout, but via console.info now');
-```
-
-## Extend
-You can simply extend a debugger:
-```js
-const log = require('debug')('auth');
-
-//creates new debug instance with extended namespace
-const logSign = log.extend('sign');
-const logLogin = log.extend('login');
-
-log('hello'); // auth hello
-logSign('hello'); //auth:sign hello
-logLogin('hello'); //auth:login hello
-```
-
-## Set dynamically
-
-You can also enable debug dynamically by calling the `enable()` method:
-
-```js
-let debug = require('debug');
-
-console.log(1, debug.enabled('test'));
-
-debug.enable('test');
-console.log(2, debug.enabled('test'));
-
-debug.disable();
-console.log(3, debug.enabled('test'));
-
-```
-
-This prints:
-```
-1 false
-2 true
-3 false
-```
-
-Usage:
-`enable(namespaces)`
-`namespaces` can include modes separated by a colon and wildcards.
-
-Note that calling `enable()` completely overrides the previously set DEBUG variable:
-
-```
-$ DEBUG=foo node -e 'var dbg = require("debug"); dbg.enable("bar"); console.log(dbg.enabled("foo"))'
-=> false
-```
-
-`disable()`
-
-Will disable all namespaces. The function returns the namespaces currently
-enabled (and skipped). This can be useful if you want to disable debugging
-temporarily without knowing what was enabled to begin with.
-
-For example:
-
-```js
-let debug = require('debug');
-debug.enable('foo:*,-foo:bar');
-let namespaces = debug.disable();
-debug.enable(namespaces);
-```
-
-Note: There is no guarantee that the string will be identical to the initial
-enable string, but semantically they will be identical.
-
-## Checking whether a debug target is enabled
-
-After you've created a debug instance, you can determine whether or not it is
-enabled by checking the `enabled` property:
-
-```javascript
-const debug = require('debug')('http');
-
-if (debug.enabled) {
- // do stuff...
-}
-```
-
-You can also manually toggle this property to force the debug instance to be
-enabled or disabled.
-
-## Usage in child processes
-
-Due to the way `debug` detects if the output is a TTY or not, colors are not shown in child processes when `stderr` is piped. A solution is to pass the `DEBUG_COLORS=1` environment variable to the child process.
-For example:
-
-```javascript
-worker = fork(WORKER_WRAP_PATH, [workerPath], {
- stdio: [
- /* stdin: */ 0,
- /* stdout: */ 'pipe',
- /* stderr: */ 'pipe',
- 'ipc',
- ],
- env: Object.assign({}, process.env, {
- DEBUG_COLORS: 1 // without this settings, colors won't be shown
- }),
-});
-
-worker.stderr.pipe(process.stderr, { end: false });
-```
-
-
-## Authors
-
- - TJ Holowaychuk
- - Nathan Rajlich
- - Andrew Rhyne
- - Josh Junon
-
-## Backers
-
-Support us with a monthly donation and help us continue our activities. [[Become a backer](https://opencollective.com/debug#backer)]
-
-## Sponsors
-
-Become a sponsor and get your logo on our README on Github with a link to your site. [[Become a sponsor](https://opencollective.com/debug#sponsor)]
-
-## License
-
-(The MIT License)
-
-Copyright (c) 2014-2017 TJ Holowaychuk <tj@vision-media.ca>
-Copyright (c) 2018-2021 Josh Junon
-
-Permission is hereby granted, free of charge, to any person obtaining
-a copy of this software and associated documentation files (the
-'Software'), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
-permit persons to whom the Software is furnished to do so, subject to
-the following conditions:
-
-The above copyright notice and this permission notice shall be
-included in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
-EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/spaces/zhang-wei-jian/docker/node_modules/minimatch/minimatch.js b/spaces/zhang-wei-jian/docker/node_modules/minimatch/minimatch.js
deleted file mode 100644
index fda45ade7cfc351fbcd76877d50b4b5f643c37a3..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/minimatch/minimatch.js
+++ /dev/null
@@ -1,947 +0,0 @@
-module.exports = minimatch
-minimatch.Minimatch = Minimatch
-
-var path = (function () { try { return require('path') } catch (e) {}}()) || {
- sep: '/'
-}
-minimatch.sep = path.sep
-
-var GLOBSTAR = minimatch.GLOBSTAR = Minimatch.GLOBSTAR = {}
-var expand = require('brace-expansion')
-
-var plTypes = {
- '!': { open: '(?:(?!(?:', close: '))[^/]*?)'},
- '?': { open: '(?:', close: ')?' },
- '+': { open: '(?:', close: ')+' },
- '*': { open: '(?:', close: ')*' },
- '@': { open: '(?:', close: ')' }
-}
-
-// any single thing other than /
-// don't need to escape / when using new RegExp()
-var qmark = '[^/]'
-
-// * => any number of characters
-var star = qmark + '*?'
-
-// ** when dots are allowed. Anything goes, except .. and .
-// not (^ or / followed by one or two dots followed by $ or /),
-// followed by anything, any number of times.
-var twoStarDot = '(?:(?!(?:\\\/|^)(?:\\.{1,2})($|\\\/)).)*?'
-
-// not a ^ or / followed by a dot,
-// followed by anything, any number of times.
-var twoStarNoDot = '(?:(?!(?:\\\/|^)\\.).)*?'
-
-// characters that need to be escaped in RegExp.
-var reSpecials = charSet('().*{}+?[]^$\\!')
-
-// "abc" -> { a:true, b:true, c:true }
-function charSet (s) {
- return s.split('').reduce(function (set, c) {
- set[c] = true
- return set
- }, {})
-}
-
-// normalizes slashes.
-var slashSplit = /\/+/
-
-minimatch.filter = filter
-function filter (pattern, options) {
- options = options || {}
- return function (p, i, list) {
- return minimatch(p, pattern, options)
- }
-}
-
-function ext (a, b) {
- b = b || {}
- var t = {}
- Object.keys(a).forEach(function (k) {
- t[k] = a[k]
- })
- Object.keys(b).forEach(function (k) {
- t[k] = b[k]
- })
- return t
-}
-
-minimatch.defaults = function (def) {
- if (!def || typeof def !== 'object' || !Object.keys(def).length) {
- return minimatch
- }
-
- var orig = minimatch
-
- var m = function minimatch (p, pattern, options) {
- return orig(p, pattern, ext(def, options))
- }
-
- m.Minimatch = function Minimatch (pattern, options) {
- return new orig.Minimatch(pattern, ext(def, options))
- }
- m.Minimatch.defaults = function defaults (options) {
- return orig.defaults(ext(def, options)).Minimatch
- }
-
- m.filter = function filter (pattern, options) {
- return orig.filter(pattern, ext(def, options))
- }
-
- m.defaults = function defaults (options) {
- return orig.defaults(ext(def, options))
- }
-
- m.makeRe = function makeRe (pattern, options) {
- return orig.makeRe(pattern, ext(def, options))
- }
-
- m.braceExpand = function braceExpand (pattern, options) {
- return orig.braceExpand(pattern, ext(def, options))
- }
-
- m.match = function (list, pattern, options) {
- return orig.match(list, pattern, ext(def, options))
- }
-
- return m
-}
-
-Minimatch.defaults = function (def) {
- return minimatch.defaults(def).Minimatch
-}
-
-function minimatch (p, pattern, options) {
- assertValidPattern(pattern)
-
- if (!options) options = {}
-
- // shortcut: comments match nothing.
- if (!options.nocomment && pattern.charAt(0) === '#') {
- return false
- }
-
- return new Minimatch(pattern, options).match(p)
-}
-
-function Minimatch (pattern, options) {
- if (!(this instanceof Minimatch)) {
- return new Minimatch(pattern, options)
- }
-
- assertValidPattern(pattern)
-
- if (!options) options = {}
-
- pattern = pattern.trim()
-
- // windows support: need to use /, not \
- if (!options.allowWindowsEscape && path.sep !== '/') {
- pattern = pattern.split(path.sep).join('/')
- }
-
- this.options = options
- this.set = []
- this.pattern = pattern
- this.regexp = null
- this.negate = false
- this.comment = false
- this.empty = false
- this.partial = !!options.partial
-
- // make the set of regexps etc.
- this.make()
-}
-
-Minimatch.prototype.debug = function () {}
-
-Minimatch.prototype.make = make
-function make () {
- var pattern = this.pattern
- var options = this.options
-
- // empty patterns and comments match nothing.
- if (!options.nocomment && pattern.charAt(0) === '#') {
- this.comment = true
- return
- }
- if (!pattern) {
- this.empty = true
- return
- }
-
- // step 1: figure out negation, etc.
- this.parseNegate()
-
- // step 2: expand braces
- var set = this.globSet = this.braceExpand()
-
- if (options.debug) this.debug = function debug() { console.error.apply(console, arguments) }
-
- this.debug(this.pattern, set)
-
- // step 3: now we have a set, so turn each one into a series of path-portion
- // matching patterns.
- // These will be regexps, except in the case of "**", which is
- // set to the GLOBSTAR object for globstar behavior,
- // and will not contain any / characters
- set = this.globParts = set.map(function (s) {
- return s.split(slashSplit)
- })
-
- this.debug(this.pattern, set)
-
- // glob --> regexps
- set = set.map(function (s, si, set) {
- return s.map(this.parse, this)
- }, this)
-
- this.debug(this.pattern, set)
-
- // filter out everything that didn't compile properly.
- set = set.filter(function (s) {
- return s.indexOf(false) === -1
- })
-
- this.debug(this.pattern, set)
-
- this.set = set
-}
-
-Minimatch.prototype.parseNegate = parseNegate
-function parseNegate () {
- var pattern = this.pattern
- var negate = false
- var options = this.options
- var negateOffset = 0
-
- if (options.nonegate) return
-
- for (var i = 0, l = pattern.length
- ; i < l && pattern.charAt(i) === '!'
- ; i++) {
- negate = !negate
- negateOffset++
- }
-
- if (negateOffset) this.pattern = pattern.substr(negateOffset)
- this.negate = negate
-}
-
-// Brace expansion:
-// a{b,c}d -> abd acd
-// a{b,}c -> abc ac
-// a{0..3}d -> a0d a1d a2d a3d
-// a{b,c{d,e}f}g -> abg acdfg acefg
-// a{b,c}d{e,f}g -> abdeg acdeg abdeg abdfg
-//
-// Invalid sets are not expanded.
-// a{2..}b -> a{2..}b
-// a{b}c -> a{b}c
-minimatch.braceExpand = function (pattern, options) {
- return braceExpand(pattern, options)
-}
-
-Minimatch.prototype.braceExpand = braceExpand
-
-function braceExpand (pattern, options) {
- if (!options) {
- if (this instanceof Minimatch) {
- options = this.options
- } else {
- options = {}
- }
- }
-
- pattern = typeof pattern === 'undefined'
- ? this.pattern : pattern
-
- assertValidPattern(pattern)
-
- // Thanks to Yeting Li for
- // improving this regexp to avoid a ReDOS vulnerability.
- if (options.nobrace || !/\{(?:(?!\{).)*\}/.test(pattern)) {
- // shortcut. no need to expand.
- return [pattern]
- }
-
- return expand(pattern)
-}
-
-var MAX_PATTERN_LENGTH = 1024 * 64
-var assertValidPattern = function (pattern) {
- if (typeof pattern !== 'string') {
- throw new TypeError('invalid pattern')
- }
-
- if (pattern.length > MAX_PATTERN_LENGTH) {
- throw new TypeError('pattern is too long')
- }
-}
-
-// parse a component of the expanded set.
-// At this point, no pattern may contain "/" in it
-// so we're going to return a 2d array, where each entry is the full
-// pattern, split on '/', and then turned into a regular expression.
-// A regexp is made at the end which joins each array with an
-// escaped /, and another full one which joins each regexp with |.
-//
-// Following the lead of Bash 4.1, note that "**" only has special meaning
-// when it is the *only* thing in a path portion. Otherwise, any series
-// of * is equivalent to a single *. Globstar behavior is enabled by
-// default, and can be disabled by setting options.noglobstar.
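-// For example (illustrative):
-//   minimatch('a/b/c.js', 'a/**/*.js')  // true:  "**" may span several path portions
-//   minimatch('a/b/c.js', 'a/*.js')     // false: "*" never matches a "/"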
-Minimatch.prototype.parse = parse
-var SUBPARSE = {}
-function parse (pattern, isSub) {
- assertValidPattern(pattern)
-
- var options = this.options
-
- // shortcuts
- if (pattern === '**') {
- if (!options.noglobstar)
- return GLOBSTAR
- else
- pattern = '*'
- }
- if (pattern === '') return ''
-
- var re = ''
- var hasMagic = !!options.nocase
- var escaping = false
- // ? => one single character
- var patternListStack = []
- var negativeLists = []
- var stateChar
- var inClass = false
- var reClassStart = -1
- var classStart = -1
- // . and .. never match anything that doesn't start with .,
- // even when options.dot is set.
- var patternStart = pattern.charAt(0) === '.' ? '' // anything
- // not (start or / followed by . or .. followed by / or end)
- : options.dot ? '(?!(?:^|\\\/)\\.{1,2}(?:$|\\\/))'
- : '(?!\\.)'
- var self = this
-
- function clearStateChar () {
- if (stateChar) {
- // we had some state-tracking character
- // that wasn't consumed by this pass.
- switch (stateChar) {
- case '*':
- re += star
- hasMagic = true
- break
- case '?':
- re += qmark
- hasMagic = true
- break
- default:
- re += '\\' + stateChar
- break
- }
- self.debug('clearStateChar %j %j', stateChar, re)
- stateChar = false
- }
- }
-
- for (var i = 0, len = pattern.length, c
- ; (i < len) && (c = pattern.charAt(i))
- ; i++) {
- this.debug('%s\t%s %s %j', pattern, i, re, c)
-
- // skip over any that are escaped.
- if (escaping && reSpecials[c]) {
- re += '\\' + c
- escaping = false
- continue
- }
-
- switch (c) {
- /* istanbul ignore next */
- case '/': {
- // completely not allowed, even escaped.
- // Should already be path-split by now.
- return false
- }
-
- case '\\':
- clearStateChar()
- escaping = true
- continue
-
- // the various stateChar values
- // for the "extglob" stuff.
- case '?':
- case '*':
- case '+':
- case '@':
- case '!':
- this.debug('%s\t%s %s %j <-- stateChar', pattern, i, re, c)
-
- // all of those are literals inside a class, except that
- // the glob [!a] means [^a] in regexp
- if (inClass) {
- this.debug(' in class')
- if (c === '!' && i === classStart + 1) c = '^'
- re += c
- continue
- }
-
- // if we already have a stateChar, then it means
- // that there was something like ** or +? in there.
- // Handle the stateChar, then proceed with this one.
- self.debug('call clearStateChar %j', stateChar)
- clearStateChar()
- stateChar = c
- // if extglob is disabled, then +(asdf|foo) isn't a thing.
- // just clear the statechar *now*, rather than even diving into
- // the patternList stuff.
- if (options.noext) clearStateChar()
- continue
-
- case '(':
- if (inClass) {
- re += '('
- continue
- }
-
- if (!stateChar) {
- re += '\\('
- continue
- }
-
- patternListStack.push({
- type: stateChar,
- start: i - 1,
- reStart: re.length,
- open: plTypes[stateChar].open,
- close: plTypes[stateChar].close
- })
- // negation is (?:(?!js)[^/]*)
- re += stateChar === '!' ? '(?:(?!(?:' : '(?:'
- this.debug('plType %j %j', stateChar, re)
- stateChar = false
- continue
-
- case ')':
- if (inClass || !patternListStack.length) {
- re += '\\)'
- continue
- }
-
- clearStateChar()
- hasMagic = true
- var pl = patternListStack.pop()
- // negation is (?:(?!js)[^/]*)
- // The others are (?:)
- re += pl.close
- if (pl.type === '!') {
- negativeLists.push(pl)
- }
- pl.reEnd = re.length
- continue
-
- case '|':
- if (inClass || !patternListStack.length || escaping) {
- re += '\\|'
- escaping = false
- continue
- }
-
- clearStateChar()
- re += '|'
- continue
-
- // these are mostly the same in regexp and glob
- case '[':
- // swallow any state-tracking char before the [
- clearStateChar()
-
- if (inClass) {
- re += '\\' + c
- continue
- }
-
- inClass = true
- classStart = i
- reClassStart = re.length
- re += c
- continue
-
- case ']':
- // a right bracket shall lose its special
- // meaning and represent itself in
- // a bracket expression if it occurs
- // first in the list. -- POSIX.2 2.8.3.2
- if (i === classStart + 1 || !inClass) {
- re += '\\' + c
- escaping = false
- continue
- }
-
- // handle the case where we left a class open.
- // "[z-a]" is valid, equivalent to "\[z-a\]"
- // split where the last [ was, make sure we don't have
- // an invalid re. if so, re-walk the contents of the
- // would-be class to re-translate any characters that
- // were passed through as-is
- // TODO: It would probably be faster to determine this
- // without a try/catch and a new RegExp, but it's tricky
- // to do safely. For now, this is safe and works.
- var cs = pattern.substring(classStart + 1, i)
- try {
- RegExp('[' + cs + ']')
- } catch (er) {
- // not a valid class!
- var sp = this.parse(cs, SUBPARSE)
- re = re.substr(0, reClassStart) + '\\[' + sp[0] + '\\]'
- hasMagic = hasMagic || sp[1]
- inClass = false
- continue
- }
-
- // finish up the class.
- hasMagic = true
- inClass = false
- re += c
- continue
-
- default:
- // swallow any state char that wasn't consumed
- clearStateChar()
-
- if (escaping) {
- // no need
- escaping = false
- } else if (reSpecials[c]
- && !(c === '^' && inClass)) {
- re += '\\'
- }
-
- re += c
-
- } // switch
- } // for
-
- // handle the case where we left a class open.
- // "[abc" is valid, equivalent to "\[abc"
- if (inClass) {
- // split where the last [ was, and escape it
- // this is a huge pita. We now have to re-walk
- // the contents of the would-be class to re-translate
- // any characters that were passed through as-is
- cs = pattern.substr(classStart + 1)
- sp = this.parse(cs, SUBPARSE)
- re = re.substr(0, reClassStart) + '\\[' + sp[0]
- hasMagic = hasMagic || sp[1]
- }
-
- // handle the case where we had a +( thing at the *end*
- // of the pattern.
- // each pattern list stack adds 3 chars, and we need to go through
- // and escape any | chars that were passed through as-is for the regexp.
- // Go through and escape them, taking care not to double-escape any
- // | chars that were already escaped.
- for (pl = patternListStack.pop(); pl; pl = patternListStack.pop()) {
- var tail = re.slice(pl.reStart + pl.open.length)
- this.debug('setting tail', re, pl)
- // maybe some even number of \, then maybe 1 \, followed by a |
- tail = tail.replace(/((?:\\{2}){0,64})(\\?)\|/g, function (_, $1, $2) {
- if (!$2) {
- // the | isn't already escaped, so escape it.
- $2 = '\\'
- }
-
- // need to escape all those slashes *again*, without escaping the
- // one that we need for escaping the | character. As it works out,
- // escaping an even number of slashes can be done by simply repeating
- // it exactly after itself. That's why this trick works.
- //
- // I am sorry that you have to see this.
- return $1 + $1 + $2 + '|'
- })
-
- this.debug('tail=%j\n %s', tail, tail, pl, re)
- var t = pl.type === '*' ? star
- : pl.type === '?' ? qmark
- : '\\' + pl.type
-
- hasMagic = true
- re = re.slice(0, pl.reStart) + t + '\\(' + tail
- }
-
- // handle trailing things that only matter at the very end.
- clearStateChar()
- if (escaping) {
- // trailing \\
- re += '\\\\'
- }
-
- // only need to apply the nodot start if the re starts with
- // something that could conceivably capture a dot
- var addPatternStart = false
- switch (re.charAt(0)) {
- case '[': case '.': case '(': addPatternStart = true
- }
-
- // Hack to work around lack of negative lookbehind in JS
- // A pattern like: *.!(x).!(y|z) needs to ensure that a name
- // like 'a.xyz.yz' doesn't match. So, the first negative
- // lookahead, has to look ALL the way ahead, to the end of
- // the pattern.
- for (var n = negativeLists.length - 1; n > -1; n--) {
- var nl = negativeLists[n]
-
- var nlBefore = re.slice(0, nl.reStart)
- var nlFirst = re.slice(nl.reStart, nl.reEnd - 8)
- var nlLast = re.slice(nl.reEnd - 8, nl.reEnd)
- var nlAfter = re.slice(nl.reEnd)
-
- nlLast += nlAfter
-
- // Handle nested stuff like *(*.js|!(*.json)), where open parens
- // mean that we should *not* include the ) in the bit that is considered
- // "after" the negated section.
- var openParensBefore = nlBefore.split('(').length - 1
- var cleanAfter = nlAfter
- for (i = 0; i < openParensBefore; i++) {
- cleanAfter = cleanAfter.replace(/\)[+*?]?/, '')
- }
- nlAfter = cleanAfter
-
- var dollar = ''
- if (nlAfter === '' && isSub !== SUBPARSE) {
- dollar = '$'
- }
- var newRe = nlBefore + nlFirst + nlAfter + dollar + nlLast
- re = newRe
- }
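-  // Concretely (per the note above): minimatch('a.xyz.yz', '*.!(x).!(y|z)') must
-  // come out false, because the first negative lookahead has to hold all the way
-  // to the end of the pattern rather than only up to the next literal dot.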
-
- // if the re is not "" at this point, then we need to make sure
- // it doesn't match against an empty path part.
- // Otherwise a/* will match a/, which it should not.
- if (re !== '' && hasMagic) {
- re = '(?=.)' + re
- }
-
- if (addPatternStart) {
- re = patternStart + re
- }
-
- // parsing just a piece of a larger pattern.
- if (isSub === SUBPARSE) {
- return [re, hasMagic]
- }
-
- // skip the regexp for non-magical patterns
- // unescape anything in it, though, so that it'll be
- // an exact match against a file etc.
- if (!hasMagic) {
- return globUnescape(pattern)
- }
-
- var flags = options.nocase ? 'i' : ''
- try {
- var regExp = new RegExp('^' + re + '$', flags)
- } catch (er) /* istanbul ignore next - should be impossible */ {
- // If it was an invalid regular expression, then it can't match
- // anything. This trick looks for a character after the end of
- // the string, which is of course impossible, except in multi-line
- // mode, but it's not a /m regex.
- return new RegExp('$.')
- }
-
- regExp._glob = pattern
- regExp._src = re
-
- return regExp
-}
-
-minimatch.makeRe = function (pattern, options) {
- return new Minimatch(pattern, options || {}).makeRe()
-}
-
-Minimatch.prototype.makeRe = makeRe
-function makeRe () {
- if (this.regexp || this.regexp === false) return this.regexp
-
- // at this point, this.set is a 2d array of partial
- // pattern strings, or "**".
- //
- // It's better to use .match(). This function shouldn't
- // be used, really, but it's pretty convenient sometimes,
- // when you just want to work with a regex.
- var set = this.set
-
- if (!set.length) {
- this.regexp = false
- return this.regexp
- }
- var options = this.options
-
- var twoStar = options.noglobstar ? star
- : options.dot ? twoStarDot
- : twoStarNoDot
- var flags = options.nocase ? 'i' : ''
-
- var re = set.map(function (pattern) {
- return pattern.map(function (p) {
- return (p === GLOBSTAR) ? twoStar
- : (typeof p === 'string') ? regExpEscape(p)
- : p._src
- }).join('\\\/')
- }).join('|')
-
- // must match entire pattern
- // ending in a * or ** will make it less strict.
- re = '^(?:' + re + ')$'
-
- // can match anything, as long as it's not this.
- if (this.negate) re = '^(?!' + re + ').*$'
-
- try {
- this.regexp = new RegExp(re, flags)
- } catch (ex) /* istanbul ignore next - should be impossible */ {
- this.regexp = false
- }
- return this.regexp
-}
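-// A minimal usage sketch for makeRe; as noted above, .match() is usually the
-// better choice, but the regexp is handy on its own:
-//
-//   var re = minimatch.makeRe('a/**/*.js')
-//   re.test('a/b/c.js')  // true  -- ** swallows the intermediate segment
-//   re.test('a/c.css')   // false -- wrong extension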
-
-minimatch.match = function (list, pattern, options) {
- options = options || {}
- var mm = new Minimatch(pattern, options)
- list = list.filter(function (f) {
- return mm.match(f)
- })
- if (mm.options.nonull && !list.length) {
- list.push(pattern)
- }
- return list
-}
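-// Example (a sketch) of the list-filtering helper above:
-//
-//   minimatch.match(['app.js', 'style.css', 'lib/util.js'], '*.js')
-//   // => ['app.js']  -- a single-segment pattern never matches 'lib/util.js'
-//   minimatch.match(['style.css'], '*.js', { nonull: true })
-//   // => ['*.js']    -- with nonull, the pattern itself is returned on no match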
-
-Minimatch.prototype.match = function match (f, partial) {
- if (typeof partial === 'undefined') partial = this.partial
- this.debug('match', f, this.pattern)
- // short-circuit in the case of busted things.
- // comments, etc.
- if (this.comment) return false
- if (this.empty) return f === ''
-
- if (f === '/' && partial) return true
-
- var options = this.options
-
- // windows: need to use /, not \
- if (path.sep !== '/') {
- f = f.split(path.sep).join('/')
- }
-
- // treat the test path as a set of pathparts.
- f = f.split(slashSplit)
- this.debug(this.pattern, 'split', f)
-
- // just ONE of the pattern sets in this.set needs to match
- // in order for it to be valid. If negating, then just one
- // match means that we have failed.
- // Either way, return on the first hit.
-
- var set = this.set
- this.debug(this.pattern, 'set', set)
-
- // Find the basename of the path by looking for the last non-empty segment
- var filename
- var i
- for (i = f.length - 1; i >= 0; i--) {
- filename = f[i]
- if (filename) break
- }
-
- for (i = 0; i < set.length; i++) {
- var pattern = set[i]
- var file = f
- if (options.matchBase && pattern.length === 1) {
- file = [filename]
- }
- var hit = this.matchOne(file, pattern, partial)
- if (hit) {
- if (options.flipNegate) return true
- return !this.negate
- }
- }
-
- // didn't get any hits. this is success if it's a negative
- // pattern, failure otherwise.
- if (options.flipNegate) return false
- return this.negate
-}
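-// Example (a sketch) of the matchBase branch above:
-//
-//   minimatch('lib/deep/util.js', '*.js')                      // false -- '*.js' is one segment
-//   minimatch('lib/deep/util.js', '*.js', { matchBase: true }) // true  -- only the basename is tested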
-
-// set partial to true to test if, for example,
-// "/a/b" matches the start of "/*/b/*/d"
-// Partial means, if you run out of file before you run
-// out of pattern, then that's fine, as long as all
-// the parts match.
-Minimatch.prototype.matchOne = function (file, pattern, partial) {
- var options = this.options
-
- this.debug('matchOne',
- { 'this': this, file: file, pattern: pattern })
-
- this.debug('matchOne', file.length, pattern.length)
-
- for (var fi = 0,
- pi = 0,
- fl = file.length,
- pl = pattern.length
- ; (fi < fl) && (pi < pl)
- ; fi++, pi++) {
- this.debug('matchOne loop')
- var p = pattern[pi]
- var f = file[fi]
-
- this.debug(pattern, p, f)
-
- // should be impossible.
- // some invalid regexp stuff in the set.
- /* istanbul ignore if */
- if (p === false) return false
-
- if (p === GLOBSTAR) {
- this.debug('GLOBSTAR', [pattern, p, f])
-
- // "**"
- // a/**/b/**/c would match the following:
- // a/b/x/y/z/c
- // a/x/y/z/b/c
- // a/b/x/b/x/c
- // a/b/c
- // To do this, take the rest of the pattern after
- // the **, and see if it would match the file remainder.
- // If so, return success.
- // If not, the ** "swallows" a segment, and try again.
- // This is recursively awful.
- //
- // a/**/b/**/c matching a/b/x/y/z/c
- // - a matches a
- // - doublestar
- // - matchOne(b/x/y/z/c, b/**/c)
- // - b matches b
- // - doublestar
- // - matchOne(x/y/z/c, c) -> no
- // - matchOne(y/z/c, c) -> no
- // - matchOne(z/c, c) -> no
- // - matchOne(c, c) yes, hit
- var fr = fi
- var pr = pi + 1
- if (pr === pl) {
- this.debug('** at the end')
- // a ** at the end will just swallow the rest.
- // We have found a match.
- // however, it will not swallow /.x, unless
- // options.dot is set.
- // . and .. are *never* matched by **, for explosively
- // exponential reasons.
- for (; fi < fl; fi++) {
- if (file[fi] === '.' || file[fi] === '..' ||
- (!options.dot && file[fi].charAt(0) === '.')) return false
- }
- return true
- }
-
- // ok, let's see if we can swallow whatever we can.
- while (fr < fl) {
- var swallowee = file[fr]
-
- this.debug('\nglobstar while', file, fr, pattern, pr, swallowee)
-
- // XXX remove this slice. Just pass the start index.
- if (this.matchOne(file.slice(fr), pattern.slice(pr), partial)) {
- this.debug('globstar found match!', fr, fl, swallowee)
- // found a match.
- return true
- } else {
- // can't swallow "." or ".." ever.
- // can only swallow ".foo" when explicitly asked.
- if (swallowee === '.' || swallowee === '..' ||
- (!options.dot && swallowee.charAt(0) === '.')) {
- this.debug('dot detected!', file, fr, pattern, pr)
- break
- }
-
- // ** swallows a segment, and continue.
- this.debug('globstar swallow a segment, and continue')
- fr++
- }
- }
-
-      // no match was found.
-      // However, in partial mode, we can't say this is necessarily over.
-      // If there's more *pattern* left, then the match can still succeed once
-      // more of the path is known, so count it as a hit if the file ran out first.
- /* istanbul ignore if */
- if (partial) {
- // ran out of file
- this.debug('\n>>> no match, partial?', file, fr, pattern, pr)
- if (fr === fl) return true
- }
- return false
- }
-
- // something other than **
- // non-magic patterns just have to match exactly
- // patterns with magic have been turned into regexps.
- var hit
- if (typeof p === 'string') {
- hit = f === p
- this.debug('string match', p, f, hit)
- } else {
- hit = f.match(p)
- this.debug('pattern match', p, f, hit)
- }
-
- if (!hit) return false
- }
-
- // Note: ending in / means that we'll get a final ""
- // at the end of the pattern. This can only match a
- // corresponding "" at the end of the file.
- // If the file ends in /, then it can only match a
- // a pattern that ends in /, unless the pattern just
- // doesn't have any more for it. But, a/b/ should *not*
- // match "a/b/*", even though "" matches against the
- // [^/]*? pattern, except in partial mode, where it might
- // simply not be reached yet.
- // However, a/b/ should still satisfy a/*
-
- // now either we fell off the end of the pattern, or we're done.
- if (fi === fl && pi === pl) {
- // ran out of pattern and filename at the same time.
- // an exact hit!
- return true
- } else if (fi === fl) {
- // ran out of file, but still had pattern left.
- // this is ok if we're doing the match as part of
- // a glob fs traversal.
- return partial
- } else /* istanbul ignore else */ if (pi === pl) {
- // ran out of pattern, still have file left.
- // this is only acceptable if we're on the very last
- // empty segment of a file with a trailing slash.
- // a/* should match a/b/
- return (fi === fl - 1) && (file[fi] === '')
- }
-
- // should be unreachable.
- /* istanbul ignore next */
- throw new Error('wtf?')
-}
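-// Example (a sketch) of partial matching, the "/a/b" vs "/*/b/*/d" case above:
-//
-//   var m = new Minimatch('/*/b/*/d')
-//   m.match('/a/b', true)  // true  -- ran out of path first, and everything so far matched
-//   m.match('/a/b')        // false -- a full match still needs the trailing '*/d'
-//   m.match('/x/y', true)  // false -- 'y' does not satisfy the literal 'b'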
-
-// replace stuff like \* with *
-function globUnescape (s) {
- return s.replace(/\\(.)/g, '$1')
-}
-
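-// escape regexp metacharacters so a literal path segment can be embedded
-// in the compiled pattern, e.g. 'a.b' becomes 'a\.b'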
-function regExpEscape (s) {
- return s.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, '\\$&')
-}
diff --git a/spaces/zhang-wei-jian/docker/node_modules/picomatch/lib/picomatch.js b/spaces/zhang-wei-jian/docker/node_modules/picomatch/lib/picomatch.js
deleted file mode 100644
index 782d809435a75989f2f3d7801f45b4fab398cb36..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/picomatch/lib/picomatch.js
+++ /dev/null
@@ -1,342 +0,0 @@
-'use strict';
-
-const path = require('path');
-const scan = require('./scan');
-const parse = require('./parse');
-const utils = require('./utils');
-const constants = require('./constants');
-const isObject = val => val && typeof val === 'object' && !Array.isArray(val);
-
-/**
- * Creates a matcher function from one or more glob patterns. The
- * returned function takes a string to match as its first argument,
- * and returns true if the string is a match. The returned matcher
- * function also takes a boolean as the second argument that, when true,
- * returns an object with additional information.
- *
- * ```js
- * const picomatch = require('picomatch');
- * // picomatch(glob[, options]);
- *
- * const isMatch = picomatch('*.!(*a)');
- * console.log(isMatch('a.a')); //=> false
- * console.log(isMatch('a.b')); //=> true
- * ```
- * @name picomatch
- * @param {String|Array} `globs` One or more glob patterns.
- * @param {Object=} `options`
- * @return {Function=} Returns a matcher function.
- * @api public
- */
-
-const picomatch = (glob, options, returnState = false) => {
- if (Array.isArray(glob)) {
- const fns = glob.map(input => picomatch(input, options, returnState));
- const arrayMatcher = str => {
- for (const isMatch of fns) {
- const state = isMatch(str);
- if (state) return state;
- }
- return false;
- };
- return arrayMatcher;
- }
-
- const isState = isObject(glob) && glob.tokens && glob.input;
-
- if (glob === '' || (typeof glob !== 'string' && !isState)) {
- throw new TypeError('Expected pattern to be a non-empty string');
- }
-
- const opts = options || {};
- const posix = utils.isWindows(options);
- const regex = isState
- ? picomatch.compileRe(glob, options)
- : picomatch.makeRe(glob, options, false, true);
-
- const state = regex.state;
- delete regex.state;
-
- let isIgnored = () => false;
- if (opts.ignore) {
- const ignoreOpts = { ...options, ignore: null, onMatch: null, onResult: null };
- isIgnored = picomatch(opts.ignore, ignoreOpts, returnState);
- }
-
- const matcher = (input, returnObject = false) => {
- const { isMatch, match, output } = picomatch.test(input, regex, options, { glob, posix });
- const result = { glob, state, regex, posix, input, output, match, isMatch };
-
- if (typeof opts.onResult === 'function') {
- opts.onResult(result);
- }
-
- if (isMatch === false) {
- result.isMatch = false;
- return returnObject ? result : false;
- }
-
- if (isIgnored(input)) {
- if (typeof opts.onIgnore === 'function') {
- opts.onIgnore(result);
- }
- result.isMatch = false;
- return returnObject ? result : false;
- }
-
- if (typeof opts.onMatch === 'function') {
- opts.onMatch(result);
- }
- return returnObject ? result : true;
- };
-
- if (returnState) {
- matcher.state = state;
- }
-
- return matcher;
-};
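-// A minimal usage sketch (for illustration): the `ignore` option handled above
-// filters inputs that would otherwise match the primary glob.
-//
-//   const pm = require('picomatch');
-//   const isMatch = pm('*.js', { ignore: 'vendor*' });
-//   isMatch('app.js');    //=> true
-//   isMatch('vendor.js'); //=> false -- matches '*.js' but is excluded by `ignore`
-//   isMatch('readme.md'); //=> false -- does not match the glob at all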
-
-/**
- * Test `input` with the given `regex`. This is used by the main
- * `picomatch()` function to test the input string.
- *
- * ```js
- * const picomatch = require('picomatch');
- * // picomatch.test(input, regex[, options]);
- *
- * console.log(picomatch.test('foo/bar', /^(?:([^/]*?)\/([^/]*?))$/));
- * // { isMatch: true, match: [ 'foo/', 'foo', 'bar' ], output: 'foo/bar' }
- * ```
- * @param {String} `input` String to test.
- * @param {RegExp} `regex`
- * @return {Object} Returns an object with matching info.
- * @api public
- */
-
-picomatch.test = (input, regex, options, { glob, posix } = {}) => {
- if (typeof input !== 'string') {
- throw new TypeError('Expected input to be a string');
- }
-
- if (input === '') {
- return { isMatch: false, output: '' };
- }
-
- const opts = options || {};
- const format = opts.format || (posix ? utils.toPosixSlashes : null);
- let match = input === glob;
- let output = (match && format) ? format(input) : input;
-
- if (match === false) {
- output = format ? format(input) : input;
- match = output === glob;
- }
-
- if (match === false || opts.capture === true) {
- if (opts.matchBase === true || opts.basename === true) {
- match = picomatch.matchBase(input, regex, options, posix);
- } else {
- match = regex.exec(output);
- }
- }
-
- return { isMatch: Boolean(match), match, output };
-};
-
-/**
- * Match the basename of a filepath.
- *
- * ```js
- * const picomatch = require('picomatch');
- * // picomatch.matchBase(input, glob[, options]);
- * console.log(picomatch.matchBase('foo/bar.js', '*.js')); // true
- * ```
- * @param {String} `input` String to test.
- * @param {RegExp|String} `glob` Glob pattern or regex created by [.makeRe](#makeRe).
- * @return {Boolean}
- * @api public
- */
-
-picomatch.matchBase = (input, glob, options, posix = utils.isWindows(options)) => {
- const regex = glob instanceof RegExp ? glob : picomatch.makeRe(glob, options);
- return regex.test(path.basename(input));
-};
-
-/**
- * Returns true if **any** of the given glob `patterns` match the specified `string`.
- *
- * ```js
- * const picomatch = require('picomatch');
- * // picomatch.isMatch(string, patterns[, options]);
- *
- * console.log(picomatch.isMatch('a.a', ['b.*', '*.a'])); //=> true
- * console.log(picomatch.isMatch('a.a', 'b.*')); //=> false
- * ```
- * @param {String|Array} str The string to test.
- * @param {String|Array} patterns One or more glob patterns to use for matching.
- * @param {Object} [options] See available [options](#options).
- * @return {Boolean} Returns true if any patterns match `str`
- * @api public
- */
-
-picomatch.isMatch = (str, patterns, options) => picomatch(patterns, options)(str);
-
-/**
- * Parse a glob pattern to create the source string for a regular
- * expression.
- *
- * ```js
- * const picomatch = require('picomatch');
- * const result = picomatch.parse(pattern[, options]);
- * ```
- * @param {String} `pattern`
- * @param {Object} `options`
- * @return {Object} Returns an object with useful properties and output to be used as a regex source string.
- * @api public
- */
-
-picomatch.parse = (pattern, options) => {
- if (Array.isArray(pattern)) return pattern.map(p => picomatch.parse(p, options));
- return parse(pattern, { ...options, fastpaths: false });
-};
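-// A minimal sketch of the parse -> compileRe flow: parse() returns a state
-// object whose `output` property holds the regex source string.
-//
-//   const state = picomatch.parse('*.js');
-//   typeof state.output;               //=> 'string' -- source, not yet compiled
-//   const re = picomatch.compileRe(state);
-//   re.test('app.js');                 //=> true
-//   re.test('nested/app.js');          //=> false -- '*' does not cross '/'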
-
-/**
- * Scan a glob pattern to separate the pattern into segments.
- *
- * ```js
- * const picomatch = require('picomatch');
- * // picomatch.scan(input[, options]);
- *
- * const result = picomatch.scan('!./foo/*.js');
- * console.log(result);
- * { prefix: '!./',
- * input: '!./foo/*.js',
- * start: 3,
- * base: 'foo',
- * glob: '*.js',
- * isBrace: false,
- * isBracket: false,
- * isGlob: true,
- * isExtglob: false,
- * isGlobstar: false,
- * negated: true }
- * ```
- * @param {String} `input` Glob pattern to scan.
- * @param {Object} `options`
- * @return {Object} Returns an object with the scanned pattern segments and flags, as shown in the example above.
- * @api public
- */
-
-picomatch.scan = (input, options) => scan(input, options);
-
-/**
- * Compile a regular expression from the `state` object returned by the
- * [parse()](#parse) method.
- *
- * @param {Object} `state`
- * @param {Object} `options`
- * @param {Boolean} `returnOutput` Intended for implementors, this argument allows you to return the raw output from the parser.
- * @param {Boolean} `returnState` Adds the state to a `state` property on the returned regex. Useful for implementors and debugging.
- * @return {RegExp}
- * @api public
- */
-
-picomatch.compileRe = (state, options, returnOutput = false, returnState = false) => {
- if (returnOutput === true) {
- return state.output;
- }
-
- const opts = options || {};
- const prepend = opts.contains ? '' : '^';
- const append = opts.contains ? '' : '$';
-
- let source = `${prepend}(?:${state.output})${append}`;
- if (state && state.negated === true) {
- source = `^(?!${source}).*$`;
- }
-
- const regex = picomatch.toRegex(source, options);
- if (returnState === true) {
- regex.state = state;
- }
-
- return regex;
-};
-
-/**
- * Create a regular expression from a glob pattern.
- *
- * ```js
- * const picomatch = require('picomatch');
- * // picomatch.makeRe(input[, options]);
- *
- * console.log(picomatch.makeRe('*.js'));
- * //=> /^(?:(?!\.)(?=.)[^/]*?\.js)$/
- * ```
- * @param {String} `input` A glob pattern to convert to a regular expression.
- * @param {Object} `options`
- * @param {Boolean} `returnOutput` Implementors may use this argument to return the compiled output, instead of a regular expression. This is not exposed on the options to prevent end-users from mutating the result.
- * @param {Boolean} `returnState` Implementors may use this argument to return the state from the parsed glob with the returned regular expression.
- * @return {RegExp} Returns a regex created from the given pattern.
- * @api public
- */
-
-picomatch.makeRe = (input, options = {}, returnOutput = false, returnState = false) => {
- if (!input || typeof input !== 'string') {
- throw new TypeError('Expected a non-empty string');
- }
-
- let parsed = { negated: false, fastpaths: true };
-
- if (options.fastpaths !== false && (input[0] === '.' || input[0] === '*')) {
- parsed.output = parse.fastpaths(input, options);
- }
-
- if (!parsed.output) {
- parsed = parse(input, options);
- }
-
- return picomatch.compileRe(parsed, options, returnOutput, returnState);
-};
-
-/**
- * Create a regular expression from the given regex source string.
- *
- * ```js
- * const picomatch = require('picomatch');
- * // picomatch.toRegex(source[, options]);
- *
- * const { output } = picomatch.parse('*.js');
- * console.log(picomatch.toRegex(output));
- * //=> /^(?:(?!\.)(?=.)[^/]*?\.js)$/
- * ```
- * @param {String} `source` Regular expression source string.
- * @param {Object} `options`
- * @return {RegExp}
- * @api public
- */
-
-picomatch.toRegex = (source, options) => {
- try {
- const opts = options || {};
- return new RegExp(source, opts.flags || (opts.nocase ? 'i' : ''));
- } catch (err) {
- if (options && options.debug === true) throw err;
- return /$^/;
- }
-};
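-// For example (a sketch): with the `nocase` option the compiled regex gets the
-// `i` flag, so picomatch.toRegex('^abc$', { nocase: true }).test('ABC') is true.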
-
-/**
- * Picomatch constants.
- * @return {Object}
- */
-
-picomatch.constants = constants;
-
-/**
- * Expose "picomatch"
- */
-
-module.exports = picomatch;