Buu | Majin | Vanishing Ball/Chocolate Beam | Fat Buu/Evil Buu |
-A transformation button that lets you transform your character when your energy meter is full and you meet the conditions.
-A pause button on the top right corner of the screen that lets you pause the game and access the menu.
-
- The tips and tricks for Z Legends APK
-Z Legends APK is a game that requires skill and strategy to win. Here are some of the tips and tricks that can help you improve your performance:
-
-- Learn the strengths and weaknesses of each character and choose the one that suits your playstyle.
-- Practice your combos and special moves in the training mode and master them in the real battles.
-- Use your guard wisely and avoid spamming it. You can also dodge or counterattack when your opponent is vulnerable.
-- Use your power wisely and save it for the right moment. You can also use it to cancel your attacks or escape from combos.
-- Use your special moves wisely and aim them carefully. You can also use them to finish off your opponent or to create openings.
-- Use your transformation wisely and activate it when you have an advantage. You can also use it to change the tide of the battle or to surprise your opponent.
-- Play online mode and challenge other players to test your skills and learn from them.
-
- Why should you play Z Legends APK?
-Z Legends APK is a game that offers a lot of fun and excitement for cosmic fighting fans. Here are some of the reasons why you should play it:
- The pros and cons of Z Legends APK
-Z Legends APK has its pros and cons, like any other game. Here are some of them:
-
-| Pros | Cons |
-| --- | --- |
-| It has high-quality graphics and sound effects that make the game realistic and immersive. | It may have some bugs and glitches that affect the game performance and experience. |
-| It has simple and intuitive controls that make the game easy to play and enjoy. | It may have some compatibility issues with some devices and models. |
-| It has various modes that offer different challenges and rewards for different preferences and tastes. | It may have some balance issues with some characters and moves that make the game unfair or boring. |
-| It has a multiplayer mode that allows you to play with other players online and have fun together. | It may have some connection issues with some servers and regions that affect the game stability and quality. |
-| It has a customization option that allows you to personalize your character and make it unique. | It may have some in-app purchases that require real money to unlock some features or items. |
-
- The ratings and reviews of Z Legends APK
-Z Legends APK has received positive ratings and reviews from many players who have tried it. Here are some of them:
-"This game is awesome! I love the graphics, the sound, and the gameplay. It feels like I'm watching the series again. The characters are amazing and their moves are epic. I recommend this game to anyone who likes cosmic fighting games."
-"This game is very good. I like the controls, the modes, and the online mode. It is easy to play and fun to win. The characters are cool and their transformations are awesome. I enjoy playing this game with my friends."
-"This game is decent. I like the concept, the story, and the customization option. It is interesting to play and challenging to master. The characters are nice and their special moves are impressive. I wish the game had more characters and stages."
- Conclusion
-Z Legends APK is a cosmic fighting game for Android that lets you play as your favorite character from the popular series and fight against other cosmic warriors in various environments and situations. You can enjoy the high-quality graphics, the realistic sound effects, the simple controls, the various modes, the multiplayer mode, and the customization option of this game. You can also unlock new characters, stages, and features as you progress in the game. Z Legends APK is a game that is worth trying if you are a fan of cosmic fighting games.
- FAQs
-Here are some of the frequently asked questions about Z Legends APK:
-
-- Q: Is Z Legends APK safe to download and install?
-- A: Yes, Z Legends APK is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and scan it before installing it.
-- Q: Is Z Legends APK free to play?
-- A: Yes, Z Legends APK is free to play. You can download and install it without paying any money. However, it may have some in-app purchases that require real money to unlock some features or items.
-- Q: Is Z Legends APK compatible with my device?
-- A: Z Legends APK is compatible with most Android devices that have version 4.4 or higher. However, it may not work well on some devices or models due to compatibility issues. You can check the requirements and the compatibility list on the official website of Z Legends APK.
-- Q: How can I update Z Legends APK?
-- A: You can update Z Legends APK by downloading and installing the latest version from the official website of Z Legends APK or from other sources. You can also check for updates in the game settings or in the app store.
-- Q: How can I contact the developers of Z Legends APK?
-- A: You can contact the developers of Z Legends APK by sending them an email at . You can also follow them on their social media accounts or visit their website for more information.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/resnet.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/resnet.py
deleted file mode 100644
index 1cb3ac057ee2d52c46fc94685b5d4e698aad8d5f..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/resnet.py
+++ /dev/null
@@ -1,316 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-
-from .utils import constant_init, kaiming_init
-
-
-def conv3x3(in_planes, out_planes, stride=1, dilation=1):
- """3x3 convolution with padding."""
- return nn.Conv2d(
- in_planes,
- out_planes,
- kernel_size=3,
- stride=stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False):
- super(BasicBlock, self).__init__()
- assert style in ['pytorch', 'caffe']
- self.conv1 = conv3x3(inplanes, planes, stride, dilation)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- assert not with_cp
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False):
- """Bottleneck block.
-
- If style is "pytorch", the stride-two layer is the 3x3 conv layer, if
- it is "caffe", the stride-two layer is the first 1x1 conv layer.
- """
- super(Bottleneck, self).__init__()
- assert style in ['pytorch', 'caffe']
- if style == 'pytorch':
- conv1_stride = 1
- conv2_stride = stride
- else:
- conv1_stride = stride
- conv2_stride = 1
- self.conv1 = nn.Conv2d(
- inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False)
- self.conv2 = nn.Conv2d(
- planes,
- planes,
- kernel_size=3,
- stride=conv2_stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
- self.bn1 = nn.BatchNorm2d(planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.conv3 = nn.Conv2d(
- planes, planes * self.expansion, kernel_size=1, bias=False)
- self.bn3 = nn.BatchNorm2d(planes * self.expansion)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- self.with_cp = with_cp
-
- def forward(self, x):
-
- def _inner_forward(x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
-
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = self.relu(out)
-
- return out
-
-
-def make_res_layer(block,
- inplanes,
- planes,
- blocks,
- stride=1,
- dilation=1,
- style='pytorch',
- with_cp=False):
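-    """Build one ResNet stage: stack ``blocks`` residual blocks (adding a 1x1 downsample
-    shortcut when the stride or channel count changes) into an ``nn.Sequential``."""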
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(
- block(
- inplanes,
- planes,
- stride,
- dilation,
- downsample,
- style=style,
- with_cp=with_cp))
- inplanes = planes * block.expansion
- for _ in range(1, blocks):
- layers.append(
- block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp))
-
- return nn.Sequential(*layers)
-
-
-class ResNet(nn.Module):
- """ResNet backbone.
-
- Args:
- depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
- num_stages (int): Resnet stages, normally 4.
- strides (Sequence[int]): Strides of the first block of each stage.
- dilations (Sequence[int]): Dilation of each stage.
- out_indices (Sequence[int]): Output from which stages.
- style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
- layer is the 3x3 conv layer, otherwise the stride-two layer is
- the first 1x1 conv layer.
- frozen_stages (int): Stages to be frozen (all param fixed). -1 means
- not freezing any parameters.
- bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze
- running stats (mean and var).
- bn_frozen (bool): Whether to freeze weight and bias of BN layers.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- """
-
- arch_settings = {
- 18: (BasicBlock, (2, 2, 2, 2)),
- 34: (BasicBlock, (3, 4, 6, 3)),
- 50: (Bottleneck, (3, 4, 6, 3)),
- 101: (Bottleneck, (3, 4, 23, 3)),
- 152: (Bottleneck, (3, 8, 36, 3))
- }
-
- def __init__(self,
- depth,
- num_stages=4,
- strides=(1, 2, 2, 2),
- dilations=(1, 1, 1, 1),
- out_indices=(0, 1, 2, 3),
- style='pytorch',
- frozen_stages=-1,
- bn_eval=True,
- bn_frozen=False,
- with_cp=False):
- super(ResNet, self).__init__()
- if depth not in self.arch_settings:
- raise KeyError(f'invalid depth {depth} for resnet')
- assert num_stages >= 1 and num_stages <= 4
- block, stage_blocks = self.arch_settings[depth]
- stage_blocks = stage_blocks[:num_stages]
- assert len(strides) == len(dilations) == num_stages
- assert max(out_indices) < num_stages
-
- self.out_indices = out_indices
- self.style = style
- self.frozen_stages = frozen_stages
- self.bn_eval = bn_eval
- self.bn_frozen = bn_frozen
- self.with_cp = with_cp
-
- self.inplanes = 64
- self.conv1 = nn.Conv2d(
- 3, 64, kernel_size=7, stride=2, padding=3, bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.res_layers = []
- for i, num_blocks in enumerate(stage_blocks):
- stride = strides[i]
- dilation = dilations[i]
- planes = 64 * 2**i
- res_layer = make_res_layer(
- block,
- self.inplanes,
- planes,
- num_blocks,
- stride=stride,
- dilation=dilation,
- style=self.style,
- with_cp=with_cp)
- self.inplanes = planes * block.expansion
- layer_name = f'layer{i + 1}'
- self.add_module(layer_name, res_layer)
- self.res_layers.append(layer_name)
-
- self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1)
-
- def init_weights(self, pretrained=None):
- if isinstance(pretrained, str):
- logger = logging.getLogger()
- from ..runner import load_checkpoint
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, nn.BatchNorm2d):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.maxpool(x)
- outs = []
- for i, layer_name in enumerate(self.res_layers):
- res_layer = getattr(self, layer_name)
- x = res_layer(x)
- if i in self.out_indices:
- outs.append(x)
- if len(outs) == 1:
- return outs[0]
- else:
- return tuple(outs)
-
- def train(self, mode=True):
- super(ResNet, self).train(mode)
- if self.bn_eval:
- for m in self.modules():
- if isinstance(m, nn.BatchNorm2d):
- m.eval()
- if self.bn_frozen:
- for params in m.parameters():
- params.requires_grad = False
- if mode and self.frozen_stages >= 0:
- for param in self.conv1.parameters():
- param.requires_grad = False
- for param in self.bn1.parameters():
- param.requires_grad = False
- self.bn1.eval()
- self.bn1.weight.requires_grad = False
- self.bn1.bias.requires_grad = False
- for i in range(1, self.frozen_stages + 1):
- mod = getattr(self, f'layer{i}')
- mod.eval()
- for param in mod.parameters():
- param.requires_grad = False
diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/nn/__init__.py b/spaces/cymic/Talking_Head_Anime_3/tha3/nn/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/davidpiscasio/unpaired-img2img/models/pix2pix_model.py b/spaces/davidpiscasio/unpaired-img2img/models/pix2pix_model.py
deleted file mode 100644
index 939eb887ee371a2685e71e17bffded7ae8c08b34..0000000000000000000000000000000000000000
--- a/spaces/davidpiscasio/unpaired-img2img/models/pix2pix_model.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import torch
-from .base_model import BaseModel
-from . import networks
-
-
-class Pix2PixModel(BaseModel):
- """ This class implements the pix2pix model, for learning a mapping from input images to output images given paired data.
-
- The model training requires '--dataset_mode aligned' dataset.
- By default, it uses a '--netG unet256' U-Net generator,
- a '--netD basic' discriminator (PatchGAN),
-    and a '--gan_mode' vanilla GAN loss (the cross-entropy objective used in the original GAN paper).
-
- pix2pix paper: https://arxiv.org/pdf/1611.07004.pdf
- """
- @staticmethod
- def modify_commandline_options(parser, is_train=True):
- """Add new dataset-specific options, and rewrite default values for existing options.
-
- Parameters:
- parser -- original option parser
- is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
-
- Returns:
- the modified parser.
-
- For pix2pix, we do not use image buffer
- The training objective is: GAN Loss + lambda_L1 * ||G(A)-B||_1
- By default, we use vanilla GAN loss, UNet with batchnorm, and aligned datasets.
- """
- # changing the default values to match the pix2pix paper (https://phillipi.github.io/pix2pix/)
- parser.set_defaults(norm='batch', netG='unet_256', dataset_mode='aligned')
- if is_train:
- parser.set_defaults(pool_size=0, gan_mode='vanilla')
- parser.add_argument('--lambda_L1', type=float, default=100.0, help='weight for L1 loss')
-
- return parser
-
- def __init__(self, opt):
- """Initialize the pix2pix class.
-
- Parameters:
- opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
- """
- BaseModel.__init__(self, opt)
-        # specify the training losses you want to print out. The training/test scripts will call <BaseModel.get_current_losses>
-        self.loss_names = ['G_GAN', 'G_L1', 'D_real', 'D_fake']
-        # specify the images you want to save/display. The training/test scripts will call <BaseModel.get_current_visuals>
-        self.visual_names = ['real_A', 'fake_B', 'real_B']
-        # specify the models you want to save to the disk. The training/test scripts will call <BaseModel.save_networks> and <BaseModel.load_networks>
- if self.isTrain:
- self.model_names = ['G', 'D']
- else: # during test time, only load G
- self.model_names = ['G']
- # define networks (both generator and discriminator)
- self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, opt.norm,
- not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids)
-
- if self.isTrain: # define a discriminator; conditional GANs need to take both input and output images; Therefore, #channels for D is input_nc + output_nc
- self.netD = networks.define_D(opt.input_nc + opt.output_nc, opt.ndf, opt.netD,
- opt.n_layers_D, opt.norm, opt.init_type, opt.init_gain, self.gpu_ids)
-
- if self.isTrain:
- # define loss functions
- self.criterionGAN = networks.GANLoss(opt.gan_mode).to(self.device)
- self.criterionL1 = torch.nn.L1Loss()
-            # initialize optimizers; schedulers will be automatically created by function <BaseModel.setup>.
- self.optimizer_G = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
- self.optimizer_D = torch.optim.Adam(self.netD.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
- self.optimizers.append(self.optimizer_G)
- self.optimizers.append(self.optimizer_D)
-
- def set_input(self, input):
- """Unpack input data from the dataloader and perform necessary pre-processing steps.
-
- Parameters:
- input (dict): include the data itself and its metadata information.
-
- The option 'direction' can be used to swap images in domain A and domain B.
- """
- AtoB = self.opt.direction == 'AtoB'
- self.real_A = input['A' if AtoB else 'B'].to(self.device)
- self.real_B = input['B' if AtoB else 'A'].to(self.device)
- self.image_paths = input['A_paths' if AtoB else 'B_paths']
-
- def forward(self):
- """Run forward pass; called by both functions and ."""
- self.fake_B = self.netG(self.real_A) # G(A)
-
- def backward_D(self):
- """Calculate GAN loss for the discriminator"""
- # Fake; stop backprop to the generator by detaching fake_B
- fake_AB = torch.cat((self.real_A, self.fake_B), 1) # we use conditional GANs; we need to feed both input and output to the discriminator
- pred_fake = self.netD(fake_AB.detach())
- self.loss_D_fake = self.criterionGAN(pred_fake, False)
- # Real
- real_AB = torch.cat((self.real_A, self.real_B), 1)
- pred_real = self.netD(real_AB)
- self.loss_D_real = self.criterionGAN(pred_real, True)
- # combine loss and calculate gradients
- self.loss_D = (self.loss_D_fake + self.loss_D_real) * 0.5
- self.loss_D.backward()
-
- def backward_G(self):
- """Calculate GAN and L1 loss for the generator"""
- # First, G(A) should fake the discriminator
- fake_AB = torch.cat((self.real_A, self.fake_B), 1)
- pred_fake = self.netD(fake_AB)
- self.loss_G_GAN = self.criterionGAN(pred_fake, True)
- # Second, G(A) = B
- self.loss_G_L1 = self.criterionL1(self.fake_B, self.real_B) * self.opt.lambda_L1
- # combine loss and calculate gradients
- self.loss_G = self.loss_G_GAN + self.loss_G_L1
- self.loss_G.backward()
-
- def optimize_parameters(self):
- self.forward() # compute fake images: G(A)
- # update D
- self.set_requires_grad(self.netD, True) # enable backprop for D
- self.optimizer_D.zero_grad() # set D's gradients to zero
- self.backward_D() # calculate gradients for D
- self.optimizer_D.step() # update D's weights
- # update G
- self.set_requires_grad(self.netD, False) # D requires no gradients when optimizing G
- self.optimizer_G.zero_grad() # set G's gradients to zero
-        self.backward_G()                   # calculate gradients for G
-        self.optimizer_G.step()             # update G's weights
diff --git a/spaces/davila7/semantic-search/README.md b/spaces/davila7/semantic-search/README.md
deleted file mode 100644
index c1d44e2785ac1478b66772545937d05fcc2a095a..0000000000000000000000000000000000000000
--- a/spaces/davila7/semantic-search/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Semantic Search
-emoji: 🔍
-colorFrom: red
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/merge/cmap.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/merge/cmap.py
deleted file mode 100644
index 3209a5d7b82c7ff0776dcae55e92c3cf816553a7..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/merge/cmap.py
+++ /dev/null
@@ -1,141 +0,0 @@
-# Copyright 2013 Google, Inc. All Rights Reserved.
-#
-# Google Author(s): Behdad Esfahbod, Roozbeh Pournader
-
-from fontTools.merge.unicode import is_Default_Ignorable
-from fontTools.pens.recordingPen import DecomposingRecordingPen
-import logging
-
-
-log = logging.getLogger("fontTools.merge")
-
-
-def computeMegaGlyphOrder(merger, glyphOrders):
- """Modifies passed-in glyphOrders to reflect new glyph names.
- Stores merger.glyphOrder."""
- megaOrder = {}
- for glyphOrder in glyphOrders:
- for i, glyphName in enumerate(glyphOrder):
- if glyphName in megaOrder:
- n = megaOrder[glyphName]
- while (glyphName + "." + repr(n)) in megaOrder:
- n += 1
- megaOrder[glyphName] = n
- glyphName += "." + repr(n)
- glyphOrder[i] = glyphName
- megaOrder[glyphName] = 1
- merger.glyphOrder = megaOrder = list(megaOrder.keys())
-
-
-def _glyphsAreSame(
- glyphSet1,
- glyphSet2,
- glyph1,
- glyph2,
- advanceTolerance=0.05,
- advanceToleranceEmpty=0.20,
-):
- pen1 = DecomposingRecordingPen(glyphSet1)
- pen2 = DecomposingRecordingPen(glyphSet2)
- g1 = glyphSet1[glyph1]
- g2 = glyphSet2[glyph2]
- g1.draw(pen1)
- g2.draw(pen2)
- if pen1.value != pen2.value:
- return False
- # Allow more width tolerance for glyphs with no ink
- tolerance = advanceTolerance if pen1.value else advanceToleranceEmpty
- # TODO Warn if advances not the same but within tolerance.
- if abs(g1.width - g2.width) > g1.width * tolerance:
- return False
- if hasattr(g1, "height") and g1.height is not None:
- if abs(g1.height - g2.height) > g1.height * tolerance:
- return False
- return True
-
-
-# Valid (format, platformID, platEncID) triplets for cmap subtables containing
-# Unicode BMP-only and Unicode Full Repertoire semantics.
-# Cf. OpenType spec for "Platform specific encodings":
-# https://docs.microsoft.com/en-us/typography/opentype/spec/name
-class _CmapUnicodePlatEncodings:
- BMP = {(4, 3, 1), (4, 0, 3), (4, 0, 4), (4, 0, 6)}
- FullRepertoire = {(12, 3, 10), (12, 0, 4), (12, 0, 6)}
-
-
-def computeMegaCmap(merger, cmapTables):
- """Sets merger.cmap and merger.glyphOrder."""
-
- # TODO Handle format=14.
- # Only merge format 4 and 12 Unicode subtables, ignores all other subtables
- # If there is a format 12 table for a font, ignore the format 4 table of it
- chosenCmapTables = []
- for fontIdx, table in enumerate(cmapTables):
- format4 = None
- format12 = None
- for subtable in table.tables:
- properties = (subtable.format, subtable.platformID, subtable.platEncID)
- if properties in _CmapUnicodePlatEncodings.BMP:
- format4 = subtable
- elif properties in _CmapUnicodePlatEncodings.FullRepertoire:
- format12 = subtable
- else:
- log.warning(
- "Dropped cmap subtable from font '%s':\t"
- "format %2s, platformID %2s, platEncID %2s",
- fontIdx,
- subtable.format,
- subtable.platformID,
- subtable.platEncID,
- )
- if format12 is not None:
- chosenCmapTables.append((format12, fontIdx))
- elif format4 is not None:
- chosenCmapTables.append((format4, fontIdx))
-
- # Build the unicode mapping
- merger.cmap = cmap = {}
- fontIndexForGlyph = {}
- glyphSets = [None for f in merger.fonts] if hasattr(merger, "fonts") else None
-
- for table, fontIdx in chosenCmapTables:
- # handle duplicates
- for uni, gid in table.cmap.items():
- oldgid = cmap.get(uni, None)
- if oldgid is None:
- cmap[uni] = gid
- fontIndexForGlyph[gid] = fontIdx
- elif is_Default_Ignorable(uni) or uni in (0x25CC,): # U+25CC DOTTED CIRCLE
- continue
- elif oldgid != gid:
- # Char previously mapped to oldgid, now to gid.
- # Record, to fix up in GSUB 'locl' later.
- if merger.duplicateGlyphsPerFont[fontIdx].get(oldgid) is None:
- if glyphSets is not None:
- oldFontIdx = fontIndexForGlyph[oldgid]
- for idx in (fontIdx, oldFontIdx):
- if glyphSets[idx] is None:
- glyphSets[idx] = merger.fonts[idx].getGlyphSet()
- # if _glyphsAreSame(glyphSets[oldFontIdx], glyphSets[fontIdx], oldgid, gid):
- # continue
- merger.duplicateGlyphsPerFont[fontIdx][oldgid] = gid
- elif merger.duplicateGlyphsPerFont[fontIdx][oldgid] != gid:
- # Char previously mapped to oldgid but oldgid is already remapped to a different
- # gid, because of another Unicode character.
- # TODO: Try harder to do something about these.
- log.warning(
- "Dropped mapping from codepoint %#06X to glyphId '%s'", uni, gid
- )
-
-
-def renameCFFCharStrings(merger, glyphOrder, cffTable):
- """Rename topDictIndex charStrings based on glyphOrder."""
- td = cffTable.cff.topDictIndex[0]
-
- charStrings = {}
- for i, v in enumerate(td.CharStrings.charStrings.values()):
- glyphName = glyphOrder[i]
- charStrings[glyphName] = v
- td.CharStrings.charStrings = charStrings
-
- td.charset = list(glyphOrder)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/eexec.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/eexec.py
deleted file mode 100644
index cafa312cdaa4696b0624438e06418ade95438441..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/eexec.py
+++ /dev/null
@@ -1,119 +0,0 @@
-"""
-PostScript Type 1 fonts make use of two types of encryption: charstring
-encryption and ``eexec`` encryption. Charstring encryption is used for
-the charstrings themselves, while ``eexec`` is used to encrypt larger
-sections of the font program, such as the ``Private`` and ``CharStrings``
-dictionaries. Despite the different names, the algorithm is the same,
-although ``eexec`` encryption uses a fixed initial key R=55665.
-
-The algorithm uses cipher feedback, meaning that the ciphertext is used
-to modify the key. Because of this, the routines in this module return
-the new key at the end of the operation.
-
-"""
-
-from fontTools.misc.textTools import bytechr, bytesjoin, byteord
-
-
-def _decryptChar(cipher, R):
- cipher = byteord(cipher)
- plain = ((cipher ^ (R >> 8))) & 0xFF
- R = ((cipher + R) * 52845 + 22719) & 0xFFFF
- return bytechr(plain), R
-
-
-def _encryptChar(plain, R):
- plain = byteord(plain)
- cipher = ((plain ^ (R >> 8))) & 0xFF
- R = ((cipher + R) * 52845 + 22719) & 0xFFFF
- return bytechr(cipher), R
-
-
-def decrypt(cipherstring, R):
- r"""
- Decrypts a string using the Type 1 encryption algorithm.
-
- Args:
- cipherstring: String of ciphertext.
- R: Initial key.
-
- Returns:
- decryptedStr: Plaintext string.
- R: Output key for subsequent decryptions.
-
- Examples::
-
- >>> testStr = b"\0\0asdadads asds\265"
- >>> decryptedStr, R = decrypt(testStr, 12321)
- >>> decryptedStr == b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1'
- True
- >>> R == 36142
- True
- """
- plainList = []
- for cipher in cipherstring:
- plain, R = _decryptChar(cipher, R)
- plainList.append(plain)
- plainstring = bytesjoin(plainList)
- return plainstring, int(R)
-
-
-def encrypt(plainstring, R):
- r"""
- Encrypts a string using the Type 1 encryption algorithm.
-
- Note that the algorithm as described in the Type 1 specification requires the
- plaintext to be prefixed with a number of random bytes. (For ``eexec`` the
- number of random bytes is set to 4.) This routine does *not* add the random
- prefix to its input.
-
- Args:
- plainstring: String of plaintext.
- R: Initial key.
-
- Returns:
- cipherstring: Ciphertext string.
- R: Output key for subsequent encryptions.
-
- Examples::
-
- >>> testStr = b"\0\0asdadads asds\265"
- >>> decryptedStr, R = decrypt(testStr, 12321)
- >>> decryptedStr == b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1'
- True
- >>> R == 36142
- True
-
- >>> testStr = b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1'
- >>> encryptedStr, R = encrypt(testStr, 12321)
- >>> encryptedStr == b"\0\0asdadads asds\265"
- True
- >>> R == 36142
- True
- """
- cipherList = []
- for plain in plainstring:
- cipher, R = _encryptChar(plain, R)
- cipherList.append(cipher)
- cipherstring = bytesjoin(cipherList)
- return cipherstring, int(R)
-
-
-def hexString(s):
- import binascii
-
- return binascii.hexlify(s)
-
-
-def deHexString(h):
- import binascii
-
- h = bytesjoin(h.split())
- return binascii.unhexlify(h)
-
-
-if __name__ == "__main__":
- import sys
- import doctest
-
- sys.exit(doctest.testmod().failed)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/pointInsidePen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/pointInsidePen.py
deleted file mode 100644
index 8a579ae4c93f824b5ce3a5e80097aeffd5f5933d..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/pointInsidePen.py
+++ /dev/null
@@ -1,192 +0,0 @@
-"""fontTools.pens.pointInsidePen -- Pen implementing "point inside" testing
-for shapes.
-"""
-
-from fontTools.pens.basePen import BasePen
-from fontTools.misc.bezierTools import solveQuadratic, solveCubic
-
-
-__all__ = ["PointInsidePen"]
-
-
-class PointInsidePen(BasePen):
-
- """This pen implements "point inside" testing: to test whether
- a given point lies inside the shape (black) or outside (white).
- Instances of this class can be recycled, as long as the
- setTestPoint() method is used to set the new point to test.
-
- Typical usage:
-
- pen = PointInsidePen(glyphSet, (100, 200))
- outline.draw(pen)
- isInside = pen.getResult()
-
- Both the even-odd algorithm and the non-zero-winding-rule
- algorithm are implemented. The latter is the default, specify
- True for the evenOdd argument of __init__ or setTestPoint
- to use the even-odd algorithm.
- """
-
- # This class implements the classical "shoot a ray from the test point
- # to infinity and count how many times it intersects the outline" (as well
- # as the non-zero variant, where the counter is incremented if the outline
- # intersects the ray in one direction and decremented if it intersects in
- # the other direction).
- # I found an amazingly clear explanation of the subtleties involved in
- # implementing this correctly for polygons here:
- # http://graphics.cs.ucdavis.edu/~okreylos/TAship/Spring2000/PointInPolygon.html
- # I extended the principles outlined on that page to curves.
-
- def __init__(self, glyphSet, testPoint, evenOdd=False):
- BasePen.__init__(self, glyphSet)
- self.setTestPoint(testPoint, evenOdd)
-
- def setTestPoint(self, testPoint, evenOdd=False):
- """Set the point to test. Call this _before_ the outline gets drawn."""
- self.testPoint = testPoint
- self.evenOdd = evenOdd
- self.firstPoint = None
- self.intersectionCount = 0
-
- def getWinding(self):
- if self.firstPoint is not None:
- # always make sure the sub paths are closed; the algorithm only works
- # for closed paths.
- self.closePath()
- return self.intersectionCount
-
- def getResult(self):
- """After the shape has been drawn, getResult() returns True if the test
- point lies within the (black) shape, and False if it doesn't.
- """
- winding = self.getWinding()
- if self.evenOdd:
- result = winding % 2
- else: # non-zero
- result = self.intersectionCount != 0
- return not not result
-
- def _addIntersection(self, goingUp):
- if self.evenOdd or goingUp:
- self.intersectionCount += 1
- else:
- self.intersectionCount -= 1
-
- def _moveTo(self, point):
- if self.firstPoint is not None:
- # always make sure the sub paths are closed; the algorithm only works
- # for closed paths.
- self.closePath()
- self.firstPoint = point
-
- def _lineTo(self, point):
- x, y = self.testPoint
- x1, y1 = self._getCurrentPoint()
- x2, y2 = point
-
- if x1 < x and x2 < x:
- return
- if y1 < y and y2 < y:
- return
- if y1 >= y and y2 >= y:
- return
-
- dx = x2 - x1
- dy = y2 - y1
- t = (y - y1) / dy
- ix = dx * t + x1
- if ix < x:
- return
- self._addIntersection(y2 > y1)
-
- def _curveToOne(self, bcp1, bcp2, point):
- x, y = self.testPoint
- x1, y1 = self._getCurrentPoint()
- x2, y2 = bcp1
- x3, y3 = bcp2
- x4, y4 = point
-
- if x1 < x and x2 < x and x3 < x and x4 < x:
- return
- if y1 < y and y2 < y and y3 < y and y4 < y:
- return
- if y1 >= y and y2 >= y and y3 >= y and y4 >= y:
- return
-
- dy = y1
- cy = (y2 - dy) * 3.0
- by = (y3 - y2) * 3.0 - cy
- ay = y4 - dy - cy - by
- solutions = sorted(solveCubic(ay, by, cy, dy - y))
- solutions = [t for t in solutions if -0.0 <= t <= 1.0]
- if not solutions:
- return
-
- dx = x1
- cx = (x2 - dx) * 3.0
- bx = (x3 - x2) * 3.0 - cx
- ax = x4 - dx - cx - bx
-
- above = y1 >= y
- lastT = None
- for t in solutions:
- if t == lastT:
- continue
- lastT = t
- t2 = t * t
- t3 = t2 * t
-
- direction = 3 * ay * t2 + 2 * by * t + cy
- incomingGoingUp = outgoingGoingUp = direction > 0.0
- if direction == 0.0:
- direction = 6 * ay * t + 2 * by
- outgoingGoingUp = direction > 0.0
- incomingGoingUp = not outgoingGoingUp
- if direction == 0.0:
- direction = ay
- incomingGoingUp = outgoingGoingUp = direction > 0.0
-
- xt = ax * t3 + bx * t2 + cx * t + dx
- if xt < x:
- continue
-
- if t in (0.0, -0.0):
- if not outgoingGoingUp:
- self._addIntersection(outgoingGoingUp)
- elif t == 1.0:
- if incomingGoingUp:
- self._addIntersection(incomingGoingUp)
- else:
- if incomingGoingUp == outgoingGoingUp:
- self._addIntersection(outgoingGoingUp)
- # else:
- # we're not really intersecting, merely touching
-
- def _qCurveToOne_unfinished(self, bcp, point):
- # XXX need to finish this, for now doing it through a cubic
- # (BasePen implements _qCurveTo in terms of a cubic) will
- # have to do.
- x, y = self.testPoint
- x1, y1 = self._getCurrentPoint()
- x2, y2 = bcp
- x3, y3 = point
- c = y1
- b = (y2 - c) * 2.0
- a = y3 - c - b
- solutions = sorted(solveQuadratic(a, b, c - y))
- solutions = [
- t for t in solutions if ZERO_MINUS_EPSILON <= t <= ONE_PLUS_EPSILON
- ]
- if not solutions:
- return
- # XXX
-
- def _closePath(self):
- if self._getCurrentPoint() != self.firstPoint:
- self.lineTo(self.firstPoint)
- self.firstPoint = None
-
- def _endPath(self):
- """Insideness is not defined for open contours."""
- raise NotImplementedError
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2e429704.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2e429704.js
deleted file mode 100644
index 480cbcd8467c061d7db47bac472ba8bdc0f54846..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2e429704.js
+++ /dev/null
@@ -1,4 +0,0 @@
-import{S as F,e as G,s as K,f as A,g as u,h as b,j as y,n as z,k,m as p,C as ce,av as Q,I as C,o as M,Z as Y,t as Z,x as q,p as T,B as ke,P,Y as B,K as D,F as V,G as I,w as j,u as H,H as O,V as ve,ae as pe,Q as we,R as je,r as J,v as U,E as ye}from"./index-9e76ffee.js";import{g as He}from"./color-5a2b6a59.js";import{B as Ne}from"./Button-30a08c0b.js";import{B as Be}from"./BlockLabel-9545c6da.js";import{E as Ce}from"./Empty-8e3485c0.js";function Me(s){let e,t,l;return{c(){e=A("svg"),t=A("path"),l=A("path"),u(t,"fill","currentColor"),u(t,"d","M12 15H5a3 3 0 0 1-3-3v-2a3 3 0 0 1 3-3h5V5a1 1 0 0 0-1-1H3V2h6a3 3 0 0 1 3 3zM5 9a1 1 0 0 0-1 1v2a1 1 0 0 0 1 1h5V9zm15 14v2a1 1 0 0 0 1 1h5v-4h-5a1 1 0 0 0-1 1z"),u(l,"fill","currentColor"),u(l,"d","M2 30h28V2Zm26-2h-7a3 3 0 0 1-3-3v-2a3 3 0 0 1 3-3h5v-2a1 1 0 0 0-1-1h-6v-2h6a3 3 0 0 1 3 3Z"),u(e,"xmlns","http://www.w3.org/2000/svg"),u(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),u(e,"aria-hidden","true"),u(e,"role","img"),u(e,"class","iconify iconify--carbon"),u(e,"width","100%"),u(e,"height","100%"),u(e,"preserveAspectRatio","xMidYMid meet"),u(e,"viewBox","0 0 32 32")},m(i,a){b(i,e,a),y(e,t),y(e,l)},p:z,i:z,o:z,d(i){i&&k(e)}}}class ue extends F{constructor(e){super(),G(this,e,null,Me,K,{})}}function W(s,e,t){const l=s.slice();l[19]=e[t][0],l[28]=e[t][1];const i=typeof l[28]=="string"?parseInt(l[28]):l[28];return l[29]=i,l}function X(s,e,t){const l=s.slice();return l[19]=e[t][0],l[20]=e[t][1],l[22]=t,l}function $(s,e,t){const l=s.slice();return l[23]=e[t],l[25]=t,l}function x(s,e,t){const l=s.slice();return l[20]=e[t][0],l[26]=e[t][1],l[22]=t,l}function Se(s){let e,t,l=s[1]&&ee(),i=C(s[0]),a=[];for(let n=0;n-1 0 +1",u(e,"class","color-legend svelte-taudaj"),u(e,"data-testid","highlighted-text:color-legend")},m(t,l){b(t,e,l)},d(t){t&&k(e)}}}function le(s){let e,t,l=s[19]+"",i,a,n;return{c(){e=p("span"),t=p("span"),i=Z(l),a=M(),u(t,"class","text svelte-taudaj"),u(e,"class","textspan score-text svelte-taudaj"),u(e,"style",n="background-color: rgba("+(s[29]<0?"128, 90, 213,"+-s[29]:"239, 68, 60,"+s[29])+")")},m(f,o){b(f,e,o),y(e,t),y(t,i),y(e,a)},p(f,o){o[0]&1&&l!==(l=f[19]+"")&&q(i,l),o[0]&1&&n!==(n="background-color: rgba("+(f[29]<0?"128, 90, 213,"+-f[29]:"239, 68, 60,"+f[29])+")")&&u(e,"style",n)},d(f){f&&k(e)}}}function te(s){let e,t=C(Object.entries(s[3])),l=[];for(let i=0;ig(m),_=m=>g(m),_e=()=>S(),me=()=>S(),de=(m,w,R)=>{N("select",{index:m,value:[w,R]})};return s.$$set=m=>{"value"in m&&t(0,i=m.value),"show_legend"in m&&t(1,a=m.show_legend),"color_map"in m&&t(9,n=m.color_map),"selectable"in m&&t(2,f=m.selectable)},s.$$.update=()=>{if(s.$$.dirty[0]&513){if(n||t(9,n={}),i.length>0){for(let[m,w]of i)if(w!==null)if(typeof w=="string"){if(t(5,c="categories"),!(w in n)){let R=He(Object.keys(n).length);t(9,n[w]=R,n)}}else t(5,c="scores")}h()}},[i,a,f,d,r,c,N,g,S,n,E,_,_e,me,de]}class Oe extends F{constructor(e){super(),G(this,e,Ie,Ve,K,{value:0,show_legend:1,color_map:9,selectable:2},null,[-1,-1])}}function re(s){let e,t;return e=new Be({props:{Icon:ue,label:s[6],float:!1,disable:s[7]===!1}}),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},p(l,i){const a={};i&64&&(a.label=l[6]),i&128&&(a.disable=l[7]===!1),e.$set(a)},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function Re(s){let e,t;return e=new Ce({props:{$$slots:{default:[ze]},$$scope:{ctx:s}}}),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},p(l,i){const 
a={};i&32768&&(a.$$scope={dirty:i,ctx:l}),e.$set(a)},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function Te(s){let e,t;return e=new Oe({props:{selectable:s[10],value:s[4],show_legend:s[5],color_map:s[0]}}),e.$on("select",s[13]),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},p(l,i){const a={};i&1024&&(a.selectable=l[10]),i&16&&(a.value=l[4]),i&32&&(a.show_legend=l[5]),i&1&&(a.color_map=l[0]),e.$set(a)},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function ze(s){let e,t;return e=new ue({}),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function Ze(s){let e,t,l,i,a,n,f;const o=[s[11]];let d={};for(let c=0;c{r=null}),U());let E=i;i=N(c),i===E?v[i].p(c,g):(J(),H(v[E],1,1,()=>{v[E]=null}),U(),a=v[i],a?a.p(c,g):(a=v[i]=h[i](c),a.c()),j(a,1),a.m(n.parentNode,n))},i(c){f||(j(e.$$.fragment,c),j(r),j(a),f=!0)},o(c){H(e.$$.fragment,c),H(r),H(a),f=!1},d(c){c&&(k(t),k(l),k(n)),O(e,c),r&&r.d(c),v[i].d(c)}}}function De(s){let e,t;return e=new Ne({props:{test_id:"highlighted-text",visible:s[3],elem_id:s[1],elem_classes:s[2],padding:!1,container:s[7],scale:s[8],min_width:s[9],$$slots:{default:[Ze]},$$scope:{ctx:s}}}),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},p(l,[i]){const a={};i&8&&(a.visible=l[3]),i&2&&(a.elem_id=l[1]),i&4&&(a.elem_classes=l[2]),i&128&&(a.container=l[7]),i&256&&(a.scale=l[8]),i&512&&(a.min_width=l[9]),i&36081&&(a.$$scope={dirty:i,ctx:l}),e.$set(a)},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function Le(s,e,t){let{elem_id:l=""}=e,{elem_classes:i=[]}=e,{visible:a=!0}=e,{value:n}=e,f,{show_legend:o}=e,{color_map:d={}}=e,{label:r="Highlighted Text"}=e,{container:h=!0}=e,{scale:v=null}=e,{min_width:N=void 0}=e,{selectable:c=!1}=e,{loading_status:g}=e;const S=ce();function E(_){ye.call(this,s,_)}return s.$$set=_=>{"elem_id"in _&&t(1,l=_.elem_id),"elem_classes"in _&&t(2,i=_.elem_classes),"visible"in _&&t(3,a=_.visible),"value"in _&&t(4,n=_.value),"show_legend"in _&&t(5,o=_.show_legend),"color_map"in _&&t(0,d=_.color_map),"label"in _&&t(6,r=_.label),"container"in _&&t(7,h=_.container),"scale"in _&&t(8,v=_.scale),"min_width"in _&&t(9,N=_.min_width),"selectable"in _&&t(10,c=_.selectable),"loading_status"in _&&t(11,g=_.loading_status)},s.$$.update=()=>{s.$$.dirty&1&&!d&&Object.keys(d).length&&t(0,d),s.$$.dirty&4112&&n!==f&&(t(12,f=n),S("change"))},[d,l,i,a,n,o,r,h,v,N,c,g,f,E]}class Ye extends F{constructor(e){super(),G(this,e,Le,De,K,{elem_id:1,elem_classes:2,visible:3,value:4,show_legend:5,color_map:0,label:6,container:7,scale:8,min_width:9,selectable:10,loading_status:11})}}const Pe=Ye,Qe=["static"];export{Pe as Component,Qe as modes};
-//# sourceMappingURL=index-2e429704.js.map
diff --git a/spaces/dcq/freegpt-webui/run.py b/spaces/dcq/freegpt-webui/run.py
deleted file mode 100644
index 1de4452d6118de6bdb58a591018440e829180390..0000000000000000000000000000000000000000
--- a/spaces/dcq/freegpt-webui/run.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from server.app import app
-from server.website import Website
-from server.backend import Backend_Api
-from json import load
-
-
-if __name__ == '__main__':
-
- # Load configuration from config.json
- config = load(open('config.json', 'r'))
- site_config = config['site_config']
-
- # Set up the website routes
- site = Website(app)
- for route in site.routes:
- app.add_url_rule(
- route,
- view_func=site.routes[route]['function'],
- methods=site.routes[route]['methods'],
- )
-
- # Set up the backend API routes
- backend_api = Backend_Api(app, config)
- for route in backend_api.routes:
- app.add_url_rule(
- route,
- view_func=backend_api.routes[route]['function'],
- methods=backend_api.routes[route]['methods'],
- )
-
- # Run the Flask server
- print(f"Running on port {site_config['port']}")
- app.run(**site_config)
- print(f"Closing port {site_config['port']}")
diff --git a/spaces/declare-lab/tango/diffusers/examples/dreambooth/README.md b/spaces/declare-lab/tango/diffusers/examples/dreambooth/README.md
deleted file mode 100644
index d53f17114404be5c7790802b364d1a7bdb0cb99f..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/examples/dreambooth/README.md
+++ /dev/null
@@ -1,464 +0,0 @@
-# DreamBooth training example
-
-[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like Stable Diffusion given just a few (3-5) images of a subject.
-The `train_dreambooth.py` script shows how to implement the training procedure and adapt it for stable diffusion.
-
-
-## Running locally with PyTorch
-
-### Installing the dependencies
-
-Before running the scripts, make sure to install the library's training dependencies:
-
-**Important**
-
-To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install -e .
-```
-
-Then cd into the example folder and run
-```bash
-pip install -r requirements.txt
-```
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-Or for a default accelerate configuration without answering questions about your environment
-
-```bash
-accelerate config default
-```
-
-Or if your environment doesn't support an interactive shell e.g. a notebook
-
-```python
-from accelerate.utils import write_basic_config
-write_basic_config()
-```
-
-### Dog toy example
-
-Now let's get our dataset. Download images from [here](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ) and save them in a directory. This will be our training data.
-
-And launch the training using
-
-**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a photo of sks dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --max_train_steps=400
-```
-
-### Training with prior-preservation loss
-
-Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data.
-According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior-preservation. 200-300 works well for most cases. The `num_class_images` flag sets the number of images to generate with the class prompt. You can place existing images in `class_data_dir`, and the training script will generate any additional images so that `num_class_images` are present in `class_data_dir` during training time.
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-
-### Training on a 16GB GPU:
-
-With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes, it's possible to run DreamBooth training on a 16GB GPU.
-
-To install `bitsandbytes`, please refer to this [readme](https://github.com/TimDettmers/bitsandbytes#requirements--installation).
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=2 --gradient_checkpointing \
- --use_8bit_adam \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-
-### Training on a 12GB GPU:
-
-It is possible to run dreambooth on a 12GB GPU by using the following optimizations:
-- [gradient checkpointing and the 8-bit optimizer](#training-on-a-16gb-gpu)
-- [xformers](#training-with-xformers)
-- [setting grads to none](#set-grads-to-none)
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 --gradient_checkpointing \
- --use_8bit_adam \
- --enable_xformers_memory_efficient_attention \
- --set_grads_to_none \
- --learning_rate=2e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-
-### Training on an 8 GB GPU:
-
-By using [DeepSpeed](https://www.deepspeed.ai/) it's possible to offload some
-tensors from VRAM to either CPU or NVMe, allowing training with less VRAM.
-
-DeepSpeed needs to be enabled with `accelerate config`. During configuration,
-answer yes to "Do you want to use DeepSpeed?". With DeepSpeed stage 2, fp16
-mixed precision, and offloading of both parameters and optimizer state to the CPU,
-it's possible to train on under 8 GB of VRAM, at the cost of requiring significantly
-more RAM (about 25 GB). See the [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more DeepSpeed configuration options.
-
-Changing the default Adam optimizer to DeepSpeed's special version of Adam,
-`deepspeed.ops.adam.DeepSpeedCPUAdam`, gives a substantial speedup, but enabling
-it requires a CUDA toolchain with the same version as PyTorch. The 8-bit optimizer
-does not seem to be compatible with DeepSpeed at the moment.
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch --mixed_precision="fp16" train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --sample_batch_size=1 \
- --gradient_accumulation_steps=1 --gradient_checkpointing \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-### Fine-tune text encoder with the UNet.
-
-The script also allows you to fine-tune the `text_encoder` along with the `unet`. It has been observed experimentally that fine-tuning the `text_encoder` gives much better results, especially on faces.
-Pass the `--train_text_encoder` argument to the script to enable training the `text_encoder`.
-
-___Note: Training the text encoder requires more memory; with this option the training won't fit on a 16GB GPU. It needs at least 24GB of VRAM.___
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_text_encoder \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --use_8bit_adam \
- --gradient_checkpointing \
- --learning_rate=2e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-### Using DreamBooth for pipelines other than Stable Diffusion
-
-The [AltDiffusion pipeline](https://huggingface.co/docs/diffusers/api/pipelines/alt_diffusion) also supports DreamBooth fine-tuning. The process is the same as above; all you need to do is replace the `MODEL_NAME`, like this:
-
-```
-export MODEL_NAME="CompVis/stable-diffusion-v1-4" --> export MODEL_NAME="BAAI/AltDiffusion-m9"
-or
-export MODEL_NAME="CompVis/stable-diffusion-v1-4" --> export MODEL_NAME="BAAI/AltDiffusion"
-```
-
-### Inference
-
-Once you have trained a model using the above command, you can run inference simply using the `StableDiffusionPipeline`. Make sure to include the `identifier` (e.g. `sks` in the above example) in your prompt.
-
-```python
-from diffusers import StableDiffusionPipeline
-import torch
-
-model_id = "path-to-your-trained-model"
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
-
-prompt = "A photo of sks dog in a bucket"
-image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
-
-image.save("dog-bucket.png")
-```
-
-### Inference from a training checkpoint
-
-You can also perform inference from one of the checkpoints saved during the training process, if you used the `--checkpointing_steps` argument. Please refer to [the documentation](https://huggingface.co/docs/diffusers/main/en/training/dreambooth#performing-inference-using-a-saved-checkpoint) to see how to do it.
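-
-As a rough sketch (this assumes a recent `accelerate`/`diffusers` setup where each `checkpoint-<step>` directory contains a `unet` subfolder, as described in the linked documentation; the base model, paths, and step number below are placeholders you should adapt):
-
-```python
-import torch
-from diffusers import StableDiffusionPipeline, UNet2DConditionModel
-
-# Placeholder paths: use your own --output_dir and checkpoint step.
-unet = UNet2DConditionModel.from_pretrained(
-    "path-to-save-model/checkpoint-400/unet", torch_dtype=torch.float16
-)
-
-# If you trained with --train_text_encoder, load and pass the text encoder as well.
-pipe = StableDiffusionPipeline.from_pretrained(
-    "CompVis/stable-diffusion-v1-4", unet=unet, torch_dtype=torch.float16
-).to("cuda")
-
-image = pipe("A photo of sks dog in a bucket", num_inference_steps=50).images[0]
-image.save("dog-bucket-checkpoint.png")
-```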
-
-## Training with Low-Rank Adaptation of Large Language Models (LoRA)
-
-Low-Rank Adaption of Large Language Models was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*
-
-In a nutshell, LoRA adapts pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights (see the sketch after this list). This has a couple of advantages:
-- The previous pretrained weights are kept frozen, so the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114).
-- Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
-- LoRA attention layers allow you to control, via a `scale` parameter, the extent to which the model is adapted towards the new training images.
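-
-The following is only an illustrative sketch of the rank-decomposition idea, not the implementation used by the training script; the names `LoRALinear`, `r`, and `alpha` are made up for illustration:
-
-```python
-import torch
-import torch.nn as nn
-
-class LoRALinear(nn.Module):
-    """Wrap a frozen nn.Linear with a trainable low-rank update: y = W x + (alpha / r) * B A x."""
-
-    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
-        super().__init__()
-        self.base = base
-        for p in self.base.parameters():
-            p.requires_grad = False  # pretrained weights stay frozen
-        # Only these two small matrices are trained.
-        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
-        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
-        self.scale = alpha / r  # the `scale` knob mentioned above
-
-    def forward(self, x):
-        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
-```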
-
-[cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in
-the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository.
-
-### Training
-
-Let's get started with a simple example. We will re-use the dog example of the [previous section](#dog-toy-example).
-
-First, you need to set up your DreamBooth training example as explained in the [installation section](#Installing-the-dependencies).
-Next, let's download the dog dataset. Download images from [here](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ) and save them in a directory. Make sure to set `INSTANCE_DIR` to the name of your directory further below. This will be our training data.
-
-Now, you can launch the training. Here we will use [Stable Diffusion 1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
-
-**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
-
-**___Note: It is quite useful to monitor the training progress by regularly generating sample images during training. [wandb](https://docs.wandb.ai/quickstart) is a nice solution to easily see the generated images during training. All you need to do is run `pip install wandb` before training and pass `--report_to="wandb"` to automatically log images.___**
-
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-v1-5"
-export INSTANCE_DIR="path-to-instance-images"
-export OUTPUT_DIR="path-to-save-model"
-```
-
-For this example we want to directly store the trained LoRA embeddings on the Hub, so
-we need to be logged in and add the `--push_to_hub` flag.
-
-```bash
-huggingface-cli login
-```
-
-Now we can start training!
-
-```bash
-accelerate launch train_dreambooth_lora.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a photo of sks dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --checkpointing_steps=100 \
- --learning_rate=1e-4 \
- --report_to="wandb" \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --max_train_steps=500 \
- --validation_prompt="A photo of sks dog in a bucket" \
- --validation_epochs=50 \
- --seed="0" \
- --push_to_hub
-```
-
-**___Note: When using LoRA we can use a much higher learning rate compared to vanilla DreamBooth. Here we
-use *1e-4* instead of the usual *2e-6*.___**
-
-The final LoRA embedding weights have been uploaded to [patrickvonplaten/lora_dreambooth_dog_example](https://huggingface.co/patrickvonplaten/lora_dreambooth_dog_example). **___Note: [The final weights](https://huggingface.co/patrickvonplaten/lora/blob/main/pytorch_attn_procs.bin) are only 3 MB in size, which is orders of magnitude smaller than the original model.___**
-
-The training results are summarized [here](https://api.wandb.ai/report/patrickvonplaten/xm6cd5q5).
-You can use the `Step` slider to see how the model learned the features of our subject as training progressed.
-
-### Inference
-
-After training, LoRA weights can be loaded very easily into the original pipeline. First, you need to
-load the original pipeline:
-
-```python
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-import torch
-
-pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
-pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
-pipe.to("cuda")
-```
-
-Next, we can load the adapter layers into the UNet with the [`load_attn_procs` function](https://huggingface.co/docs/diffusers/api/loaders#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs).
-
-```python
-pipe.unet.load_attn_procs("patrickvonplaten/lora_dreambooth_dog_example")
-```
-
-Finally, we can run the model in inference.
-
-```python
-image = pipe("A picture of a sks dog in a bucket", num_inference_steps=25).images[0]
-```
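-
-The `scale` parameter mentioned above can also be passed at inference time to blend between the base model and the fully adapted LoRA weights. A minimal sketch, assuming the pipeline and adapter loaded above and a diffusers version that supports passing `cross_attention_kwargs` to the pipeline call:
-
-```python
-# scale 0.0 behaves like the base model, 1.0 applies the full LoRA adaptation
-image = pipe(
-    "A picture of a sks dog in a bucket",
-    num_inference_steps=25,
-    cross_attention_kwargs={"scale": 0.5},
-).images[0]
-image.save("dog-bucket-lora-half.png")
-```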
-
-## Training with Flax/JAX
-
-For faster training on TPUs and GPUs you can leverage the Flax training example. Follow the instructions above to get the model and dataset before running the script.
-
-**___Note: The Flax example doesn't yet support features like gradient checkpointing and gradient accumulation, so to use Flax for faster training you will need GPUs/TPUs with more than 30GB of memory.___**
-
-
-Before running the scripts, make sure to install the library's training dependencies:
-
-```bash
-pip install -U -r requirements_flax.txt
-```
-
-
-### Training without prior preservation loss
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export INSTANCE_DIR="path-to-instance-images"
-export OUTPUT_DIR="path-to-save-model"
-
-python train_dreambooth_flax.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a photo of sks dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --learning_rate=5e-6 \
- --max_train_steps=400
-```
-
-
-### Training with prior preservation loss
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-python train_dreambooth_flax.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --learning_rate=5e-6 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-
-### Fine-tune text encoder with the UNet.
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-python train_dreambooth_flax.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_text_encoder \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --learning_rate=2e-6 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-### Training with xFormers
-You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
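-
-For reference, the same memory-efficient attention can be switched on for a loaded pipeline at inference time (a minimal sketch, assuming xFormers is installed in your environment):
-
-```python
-from diffusers import DiffusionPipeline
-import torch
-
-pipe = DiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
-).to("cuda")
-pipe.enable_xformers_memory_efficient_attention()  # uses xFormers attention kernels
-```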
-
-You can also use DreamBooth to train a specialized inpainting model. See [the script in the research folder for details](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint).
-
-### Set grads to none
-
-To save even more memory, pass the `--set_grads_to_none` argument to the script. This will set grads to None instead of zero. However, be aware that it changes certain behaviors, so if you start experiencing any problems, remove this argument.
-
-More info: https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html
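-
-A minimal illustration of the underlying PyTorch call the flag corresponds to (the model and optimizer here are placeholders, not part of the training script):
-
-```python
-import torch
-
-model = torch.nn.Linear(4, 4)
-optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
-
-# gradients are freed (set to None) instead of being filled with zeros,
-# which saves a bit of memory between optimizer steps
-optimizer.zero_grad(set_to_none=True)
-```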
-
-### Experimental results
-You can refer to [this blog post](https://huggingface.co/blog/dreambooth) that discusses some DreamBooth experiments in detail. Specifically, it recommends a set of DreamBooth-specific tips and tricks that we have found to work well for a variety of subjects.
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/__init__.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/__init__.py
deleted file mode 100644
index 85e8118e75e7e4352f8efb12552ba9fff4bf491c..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .pipeline_ddim import DDIMPipeline
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
deleted file mode 100644
index 822bd49ce31ca8d6bb53bc41b4f4fa6411e6b319..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
+++ /dev/null
@@ -1,518 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-import torch.nn.functional as F
-from transformers import CLIPTextModel, CLIPTokenizer
-
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...schedulers import EulerDiscreteScheduler
-from ...utils import is_accelerate_available, logging, randn_tensor
-from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.preprocess
-def preprocess(image):
- if isinstance(image, torch.Tensor):
- return image
- elif isinstance(image, PIL.Image.Image):
- image = [image]
-
- if isinstance(image[0], PIL.Image.Image):
- w, h = image[0].size
- w, h = (x - x % 64 for x in (w, h)) # resize to integer multiple of 64
-
- image = [np.array(i.resize((w, h)))[None, :] for i in image]
- image = np.concatenate(image, axis=0)
- image = np.array(image).astype(np.float32) / 255.0
- image = image.transpose(0, 3, 1, 2)
- image = 2.0 * image - 1.0
- image = torch.from_numpy(image)
- elif isinstance(image[0], torch.Tensor):
- image = torch.cat(image, dim=0)
- return image
-
-
-class StableDiffusionLatentUpscalePipeline(DiffusionPipeline):
- r"""
- Pipeline to upscale the resolution of Stable Diffusion output images by a factor of 2.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`EulerDiscreteScheduler`].
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: EulerDiscreteScheduler,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- )
-
- def enable_sequential_cpu_offload(self, gpu_id=0):
- r"""
-        Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet,
-        text_encoder and vae have their state dicts saved to CPU and then are moved to a `torch.device('meta')`
-        and loaded to GPU only when their specific submodule has its `forward` method called.
- """
- if is_accelerate_available():
- from accelerate import cpu_offload
- else:
- raise ImportError("Please install accelerate via `pip install accelerate`")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]:
- if cpu_offloaded_model is not None:
- cpu_offload(cpu_offloaded_model, device)
-
- @property
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device
- def _execution_device(self):
- r"""
- Returns the device on which the pipeline's models will be executed. After calling
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
- hooks.
- """
- if not hasattr(self.unet, "_hf_hook"):
- return self.device
- for module in self.unet.modules():
- if (
- hasattr(module, "_hf_hook")
- and hasattr(module._hf_hook, "execution_device")
- and module._hf_hook.execution_device is not None
- ):
- return torch.device(module._hf_hook.execution_device)
- return self.device
-
- def _encode_prompt(self, prompt, device, do_classifier_free_guidance, negative_prompt):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `list(int)`):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- """
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_length=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
-
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- text_encoder_out = self.text_encoder(
- text_input_ids.to(device),
- output_hidden_states=True,
- )
- text_embeddings = text_encoder_out.hidden_states[-1]
- text_pooler_out = text_encoder_out.pooler_output
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_length=True,
- return_tensors="pt",
- )
-
- uncond_encoder_out = self.text_encoder(
- uncond_input.input_ids.to(device),
- output_hidden_states=True,
- )
-
- uncond_embeddings = uncond_encoder_out.hidden_states[-1]
- uncond_pooler_out = uncond_encoder_out.pooler_output
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
- text_pooler_out = torch.cat([uncond_pooler_out, text_pooler_out])
-
- return text_embeddings, text_pooler_out
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents).sample
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- def check_inputs(self, prompt, image, callback_steps):
- if not isinstance(prompt, str) and not isinstance(prompt, list):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if (
- not isinstance(image, torch.Tensor)
- and not isinstance(image, PIL.Image.Image)
- and not isinstance(image, list)
- ):
- raise ValueError(
- f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or `list` but is {type(image)}"
- )
-
- # verify batch size of prompt and image are same if image is a list or tensor
- if isinstance(image, list) or isinstance(image, torch.Tensor):
- if isinstance(prompt, str):
- batch_size = 1
- else:
- batch_size = len(prompt)
- if isinstance(image, list):
- image_batch_size = len(image)
- else:
- image_batch_size = image.shape[0] if image.ndim == 4 else 1
- if batch_size != image_batch_size:
- raise ValueError(
- f"`prompt` has batch size {batch_size} and `image` has batch size {image_batch_size}."
- " Please make sure that passed `prompt` matches the batch size of `image`."
- )
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height, width)
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- if latents.shape != shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- image: Union[torch.FloatTensor, PIL.Image.Image, List[PIL.Image.Image]],
- num_inference_steps: int = 75,
- guidance_scale: float = 9.0,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image upscaling.
- image (`PIL.Image.Image` or List[`PIL.Image.Image`] or `torch.FloatTensor`):
- `Image`, or tensor representing an image batch which will be upscaled. If it's a tensor, it can be
- either a latent output from a stable diffusion model, or an image tensor in the range `[-1, 1]`. It
- will be considered a `latent` if `image.shape[1]` is `4`; otherwise, it will be considered to be an
- image representation and encoded using this pipeline's `vae` encoder.
-            num_inference_steps (`int`, *optional*, defaults to 75):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
-            guidance_scale (`float`, *optional*, defaults to 9.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- generator (`torch.Generator`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
-
- Examples:
- ```py
- >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline
- >>> import torch
-
-
- >>> pipeline = StableDiffusionPipeline.from_pretrained(
- ... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
- ... )
- >>> pipeline.to("cuda")
-
- >>> model_id = "stabilityai/sd-x2-latent-upscaler"
- >>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
- >>> upscaler.to("cuda")
-
- >>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic"
- >>> generator = torch.manual_seed(33)
-
- >>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images
-
- >>> with torch.no_grad():
- ... image = pipeline.decode_latents(low_res_latents)
- >>> image = pipeline.numpy_to_pil(image)[0]
-
- >>> image.save("../images/a1.png")
-
- >>> upscaled_image = upscaler(
- ... prompt=prompt,
- ... image=low_res_latents,
- ... num_inference_steps=20,
- ... guidance_scale=0,
- ... generator=generator,
- ... ).images[0]
-
- >>> upscaled_image.save("../images/a2.png")
- ```
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
-
- # 1. Check inputs
- self.check_inputs(prompt, image, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
- device = self._execution_device
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- if guidance_scale == 0:
- prompt = [""] * batch_size
-
- # 3. Encode input prompt
- text_embeddings, text_pooler_out = self._encode_prompt(
- prompt, device, do_classifier_free_guidance, negative_prompt
- )
-
- # 4. Preprocess image
- image = preprocess(image)
- image = image.to(dtype=text_embeddings.dtype, device=device)
- if image.shape[1] == 3:
- # encode image if not in latent-space yet
- image = self.vae.encode(image).latent_dist.sample() * self.vae.config.scaling_factor
-
- # 5. set timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- batch_multiplier = 2 if do_classifier_free_guidance else 1
- image = image[None, :] if image.ndim == 3 else image
- image = torch.cat([image] * batch_multiplier)
-
- # 5. Add noise to image (set to be 0):
-        # (see note from the author below):
-        # "This step theoretically can make the model work better on out-of-distribution inputs, but mostly just seems to make it match the input less, so it's turned off by default."
- noise_level = torch.tensor([0.0], dtype=torch.float32, device=device)
- noise_level = torch.cat([noise_level] * image.shape[0])
- inv_noise_level = (noise_level**2 + 1) ** (-0.5)
-
- image_cond = F.interpolate(image, scale_factor=2, mode="nearest") * inv_noise_level[:, None, None, None]
- image_cond = image_cond.to(text_embeddings.dtype)
-
- noise_level_embed = torch.cat(
- [
- torch.ones(text_pooler_out.shape[0], 64, dtype=text_pooler_out.dtype, device=device),
- torch.zeros(text_pooler_out.shape[0], 64, dtype=text_pooler_out.dtype, device=device),
- ],
- dim=1,
- )
-
- timestep_condition = torch.cat([noise_level_embed, text_pooler_out], dim=1)
-
- # 6. Prepare latent variables
- height, width = image.shape[2:]
- num_channels_latents = self.vae.config.latent_channels
- latents = self.prepare_latents(
- batch_size,
- num_channels_latents,
- height * 2, # 2x upscale
- width * 2,
- text_embeddings.dtype,
- device,
- generator,
- latents,
- )
-
- # 7. Check that sizes of image and latents match
- num_channels_image = image.shape[1]
- if num_channels_latents + num_channels_image != self.unet.config.in_channels:
- raise ValueError(
- f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects"
- f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +"
- f" `num_channels_image`: {num_channels_image} "
- f" = {num_channels_latents+num_channels_image}. Please verify the config of"
- " `pipeline.unet` or your `image` input."
- )
-
- # 9. Denoising loop
- num_warmup_steps = 0
-
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- sigma = self.scheduler.sigmas[i]
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- scaled_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- scaled_model_input = torch.cat([scaled_model_input, image_cond], dim=1)
- # preconditioning parameter based on Karras et al. (2022) (table 1)
- timestep = torch.log(sigma) * 0.25
-
- noise_pred = self.unet(
- scaled_model_input,
- timestep,
- encoder_hidden_states=text_embeddings,
- timestep_cond=timestep_condition,
- ).sample
-
- # in original repo, the output contains a variance channel that's not used
- noise_pred = noise_pred[:, :-1]
-
- # apply preconditioning, based on table 1 in Karras et al. (2022)
- inv_sigma = 1 / (sigma**2 + 1)
- noise_pred = inv_sigma * latent_model_input + self.scheduler.scale_model_input(sigma, t) * noise_pred
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents).prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # 10. Post-processing
- image = self.decode_latents(latents)
-
- # 11. Convert to PIL
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
diff --git a/spaces/deepseek-ai/deepseek-coder-33b-instruct/style.css b/spaces/deepseek-ai/deepseek-coder-33b-instruct/style.css
deleted file mode 100644
index 60878febc13db001635a52688abfe34d95e6c309..0000000000000000000000000000000000000000
--- a/spaces/deepseek-ai/deepseek-coder-33b-instruct/style.css
+++ /dev/null
@@ -1,16 +0,0 @@
-h1 {
- text-align: center;
-}
-
-#duplicate-button {
- margin: auto;
- color: white;
- background: #1565c0;
- border-radius: 100vh;
-}
-
-.contain {
- max-width: 900px;
- margin: auto;
- padding-top: 1.5rem;
-}
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_oas3_api_svc.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_oas3_api_svc.py
deleted file mode 100644
index 5c23f6566cce23a42f1b7c9ef02c4720dd7b1a4d..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_oas3_api_svc.py
+++ /dev/null
@@ -1,43 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/8/17
-@Author : mashenquan
-@File : metagpt_oas3_api_svc.py
-@Desc : MetaGPT OpenAPI Specification 3.0 REST API service
-"""
-import asyncio
-from pathlib import Path
-import sys
-
-import connexion
-
-sys.path.append(str(Path(__file__).resolve().parent.parent.parent)) # fix-bug: No module named 'metagpt'
-
-
-def oas_http_svc():
- """Start the OAS 3.0 OpenAPI HTTP service"""
- app = connexion.AioHttpApp(__name__, specification_dir='../../.well-known/')
- app.add_api("metagpt_oas3_api.yaml")
- app.add_api("openapi.yaml")
- app.run(port=8080)
-
-
-async def async_main():
- """Start the OAS 3.0 OpenAPI HTTP service in the background."""
- loop = asyncio.get_event_loop()
- loop.run_in_executor(None, oas_http_svc)
-
- # TODO: replace following codes:
- while True:
- await asyncio.sleep(1)
- print("sleep")
-
-
-def main():
- oas_http_svc()
-
-
-if __name__ == "__main__":
- # asyncio.run(async_main())
- main()
diff --git a/spaces/dfurman/chat-all-in/cached-all-mpnet-base-v2/train_script.py b/spaces/dfurman/chat-all-in/cached-all-mpnet-base-v2/train_script.py
deleted file mode 100644
index 5452ae48154ce9166d43f5a70a6e410add5ac1c3..0000000000000000000000000000000000000000
--- a/spaces/dfurman/chat-all-in/cached-all-mpnet-base-v2/train_script.py
+++ /dev/null
@@ -1,397 +0,0 @@
-"""
-Train script for a single file
-
-Need to set the TPU address first:
-export XRT_TPU_CONFIG="localservice;0;localhost:51011"
-"""
-
-import torch.multiprocessing as mp
-import threading
-import time
-import random
-import sys
-import argparse
-import gzip
-import json
-import logging
-import tqdm
-import torch
-from torch import nn
-from torch.utils.data import DataLoader
-import torch_xla
-import torch_xla.core
-import torch_xla.core.functions
-import torch_xla.core.xla_model as xm
-import torch_xla.distributed.xla_multiprocessing as xmp
-import torch_xla.distributed.parallel_loader as pl
-import os
-from shutil import copyfile
-
-
-from transformers import (
- AdamW,
- AutoModel,
- AutoTokenizer,
- get_linear_schedule_with_warmup,
- set_seed,
-)
-
-
-class AutoModelForSentenceEmbedding(nn.Module):
- def __init__(self, model_name, tokenizer, normalize=True):
- super(AutoModelForSentenceEmbedding, self).__init__()
-
- self.model = AutoModel.from_pretrained(model_name)
- self.normalize = normalize
- self.tokenizer = tokenizer
-
- def forward(self, **kwargs):
- model_output = self.model(**kwargs)
- embeddings = self.mean_pooling(model_output, kwargs["attention_mask"])
- if self.normalize:
- embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
-
- return embeddings
-
- def mean_pooling(self, model_output, attention_mask):
- token_embeddings = model_output[
- 0
- ] # First element of model_output contains all token embeddings
- input_mask_expanded = (
- attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
- )
- return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
- input_mask_expanded.sum(1), min=1e-9
- )
-
- def save_pretrained(self, output_path):
- if xm.is_master_ordinal():
- self.tokenizer.save_pretrained(output_path)
- self.model.config.save_pretrained(output_path)
-
- xm.save(self.model.state_dict(), os.path.join(output_path, "pytorch_model.bin"))
-
-
-def train_function(index, args, queue):
- tokenizer = AutoTokenizer.from_pretrained(args.model)
- model = AutoModelForSentenceEmbedding(args.model, tokenizer)
-
- ### Train Loop
- device = xm.xla_device()
- model = model.to(device)
-
- # Instantiate optimizer
- optimizer = AdamW(params=model.parameters(), lr=2e-5, correct_bias=True)
-
- lr_scheduler = get_linear_schedule_with_warmup(
- optimizer=optimizer,
- num_warmup_steps=500,
- num_training_steps=args.steps,
- )
-
- # Now we train the model
- cross_entropy_loss = nn.CrossEntropyLoss()
- max_grad_norm = 1
-
- model.train()
-
- for global_step in tqdm.trange(args.steps, disable=not xm.is_master_ordinal()):
- #### Get the batch data
- batch = queue.get()
- # print(index, "batch {}x{}".format(len(batch), ",".join([str(len(b)) for b in batch])))
-
- if len(batch[0]) == 2: # (anchor, positive)
- text1 = tokenizer(
- [b[0] for b in batch],
- return_tensors="pt",
- max_length=args.max_length,
- truncation=True,
- padding="max_length",
- )
- text2 = tokenizer(
- [b[1] for b in batch],
- return_tensors="pt",
- max_length=args.max_length,
- truncation=True,
- padding="max_length",
- )
-
- ### Compute embeddings
- embeddings_a = model(**text1.to(device))
- embeddings_b = model(**text2.to(device))
-
-            ### Gather all embeddings
- embeddings_a = torch_xla.core.functions.all_gather(embeddings_a)
- embeddings_b = torch_xla.core.functions.all_gather(embeddings_b)
-
- ### Compute similarity scores 512 x 512
- scores = torch.mm(embeddings_a, embeddings_b.transpose(0, 1)) * args.scale
-
- ### Compute cross-entropy loss
- labels = torch.tensor(
- range(len(scores)), dtype=torch.long, device=embeddings_a.device
- ) # Example a[i] should match with b[i]
-
- ## Symmetric loss as in CLIP
- loss = (
- cross_entropy_loss(scores, labels)
- + cross_entropy_loss(scores.transpose(0, 1), labels)
- ) / 2
-
- else: # (anchor, positive, negative)
- text1 = tokenizer(
- [b[0] for b in batch],
- return_tensors="pt",
- max_length=args.max_length,
- truncation=True,
- padding="max_length",
- )
- text2 = tokenizer(
- [b[1] for b in batch],
- return_tensors="pt",
- max_length=args.max_length,
- truncation=True,
- padding="max_length",
- )
- text3 = tokenizer(
- [b[2] for b in batch],
- return_tensors="pt",
- max_length=args.max_length,
- truncation=True,
- padding="max_length",
- )
-
- embeddings_a = model(**text1.to(device))
- embeddings_b1 = model(**text2.to(device))
- embeddings_b2 = model(**text3.to(device))
-
- embeddings_a = torch_xla.core.functions.all_gather(embeddings_a)
- embeddings_b1 = torch_xla.core.functions.all_gather(embeddings_b1)
- embeddings_b2 = torch_xla.core.functions.all_gather(embeddings_b2)
-
- embeddings_b = torch.cat([embeddings_b1, embeddings_b2])
-
- ### Compute similarity scores 512 x 1024
- scores = torch.mm(embeddings_a, embeddings_b.transpose(0, 1)) * args.scale
-
- ### Compute cross-entropy loss
- labels = torch.tensor(
- range(len(scores)), dtype=torch.long, device=embeddings_a.device
- ) # Example a[i] should match with b[i]
-
- ## One-way loss
- loss = cross_entropy_loss(scores, labels)
-
- # Backward pass
- optimizer.zero_grad()
- loss.backward()
- torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
-
- xm.optimizer_step(optimizer, barrier=True)
- lr_scheduler.step()
-
- # Save model
- if (global_step + 1) % args.save_steps == 0:
- output_path = os.path.join(args.output, str(global_step + 1))
- xm.master_print("save model: " + output_path)
- model.save_pretrained(output_path)
-
- output_path = os.path.join(args.output, "final")
- xm.master_print("save model final: " + output_path)
- model.save_pretrained(output_path)
-
-
-def produce_data(args, queue, filepaths, dataset_indices):
- global_batch_size = args.batch_size * args.nprocs # Global batch size
- size_per_dataset = int(
- global_batch_size / args.datasets_per_batch
- ) # How many datasets per batch
- num_same_dataset = int(size_per_dataset / args.batch_size)
- print("producer", "global_batch_size", global_batch_size)
- print("producer", "size_per_dataset", size_per_dataset)
- print("producer", "num_same_dataset", num_same_dataset)
-
- datasets = []
- for filepath in filepaths:
- if "reddit_" in filepath: # Special dataset class for Reddit files
- data_obj = RedditDataset(filepath)
- else:
- data_obj = Dataset(filepath)
- datasets.append(iter(data_obj))
-
- # Store if dataset is in a 2 col or 3 col format
- num_cols = {idx: len(next(dataset)) for idx, dataset in enumerate(datasets)}
-
- while True:
- texts_in_batch = set()
- batch_format = None # 2 vs 3 col format for this batch
-
- # Add data from several sub datasets
- for _ in range(args.datasets_per_batch):
- valid_dataset = False # Check that datasets have the same 2/3 col format
- while not valid_dataset:
- data_idx = random.choice(dataset_indices)
- if batch_format is None:
- batch_format = num_cols[data_idx]
- valid_dataset = True
- else: # Check that this dataset has the same format
- valid_dataset = batch_format == num_cols[data_idx]
-
- # Get data from this dataset
- dataset = datasets[data_idx]
- for _ in range(num_same_dataset):
- for _ in range(args.nprocs):
- batch_device = [] # A batch for one device
- while len(batch_device) < args.batch_size:
- sample = next(dataset)
- in_batch = False
- for text in sample:
- if text in texts_in_batch:
- in_batch = True
- break
-
- if not in_batch:
- for text in sample:
- texts_in_batch.add(text)
- batch_device.append(sample)
-
- queue.put(batch_device)
-
-
-class RedditDataset:
- """
- A class that handles the reddit data files
- """
-
- def __init__(self, filepath):
- self.filepath = filepath
-
- def __iter__(self):
- while True:
- with gzip.open(self.filepath, "rt") as fIn:
- for line in fIn:
- data = json.loads(line)
-
- if "response" in data and "context" in data:
- yield [data["response"], data["context"]]
-
-
-class Dataset:
- """
- A class that handles one dataset
- """
-
- def __init__(self, filepath):
- self.filepath = filepath
-
- def __iter__(self):
- max_dataset_size = 10 * 1000 * 1000 # Cache small datasets in memory
- dataset = []
- data_format = None
-
- while dataset is None or len(dataset) == 0:
- with gzip.open(self.filepath, "rt") as fIn:
- for line in fIn:
- data = json.loads(line)
- if isinstance(data, dict):
- data = data["texts"]
-
- if data_format is None:
- data_format = len(data)
-
- # Ensure that all entries are of the same 2/3 col format
- assert len(data) == data_format
-
- if dataset is not None:
- dataset.append(data)
- if len(dataset) >= max_dataset_size:
- dataset = None
-
- yield data
-
- # Data loaded. Now stream to the queue
- # Shuffle for each epoch
- while True:
- random.shuffle(dataset)
- for data in dataset:
- yield data
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--model", default="nreimers/MiniLM-L6-H384-uncased")
- parser.add_argument("--steps", type=int, default=2000)
- parser.add_argument("--save_steps", type=int, default=10000)
- parser.add_argument("--batch_size", type=int, default=64)
- parser.add_argument("--max_length", type=int, default=128)
- parser.add_argument("--nprocs", type=int, default=8)
- parser.add_argument(
- "--datasets_per_batch", type=int, default=2, help="Number of datasets per batch"
- )
- parser.add_argument(
- "--scale",
- type=float,
- default=20,
- help="Use 20 for cossim, and 1 when you work with unnormalized embeddings with dot product",
- )
- parser.add_argument(
- "--data_folder", default="/data", help="Folder with your dataset files"
- )
- parser.add_argument("data_config", help="A data_config.json file")
- parser.add_argument("output")
- args = parser.parse_args()
-
-    # Ensure the global batch size is divisible by datasets_per_batch
- assert (args.batch_size * args.nprocs) % args.datasets_per_batch == 0
-
- logging.info("Output: " + args.output)
- if os.path.exists(args.output):
- print("Output folder already exists.")
- input("Continue?")
-
- # Write train script to output path
- os.makedirs(args.output, exist_ok=True)
-
- data_config_path = os.path.join(args.output, "data_config.json")
- copyfile(args.data_config, data_config_path)
-
- train_script_path = os.path.join(args.output, "train_script.py")
- copyfile(__file__, train_script_path)
- with open(train_script_path, "a") as fOut:
- fOut.write("\n\n# Script was called via:\n#python " + " ".join(sys.argv))
-
- # Load data config
- with open(args.data_config) as fIn:
- data_config = json.load(fIn)
-
- queue = mp.Queue(maxsize=100 * args.nprocs)
-
- filepaths = []
- dataset_indices = []
- for idx, data in enumerate(data_config):
- filepaths.append(
- os.path.join(os.path.expanduser(args.data_folder), data["name"])
- )
- dataset_indices.extend([idx] * data["weight"])
-
- # Start producer
- p = mp.Process(target=produce_data, args=(args, queue, filepaths, dataset_indices))
- p.start()
-
- # Run training
- print("Start processes:", args.nprocs)
- xmp.spawn(
- train_function, args=(args, queue), nprocs=args.nprocs, start_method="fork"
- )
- print("Training done")
- print(
- "It might be that not all processes exit automatically. In that case you must manually kill this process."
- )
- print("With 'pkill python' you can kill all remaining python processes")
- p.kill()
- exit()
-
-
-# Script was called via:
-# python train_many_data_files_v2.py --steps 1000000 --batch_size 64 --model microsoft/mpnet-base train_data_configs/all_datasets_v4.json output/all_datasets_v4_mpnet-base
diff --git a/spaces/diacanFperku/AutoGPT/Chuppa Rustam 3 Full Movie Hd 1080p In Hindi.md b/spaces/diacanFperku/AutoGPT/Chuppa Rustam 3 Full Movie Hd 1080p In Hindi.md
deleted file mode 100644
index 84eb1d860712783b7f9c9b57b875685fb5353c0f..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Chuppa Rustam 3 Full Movie Hd 1080p In Hindi.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Chuppa Rustam 3 Full Movie Hd 1080p In Hindi
DOWNLOAD ✓ https://gohhs.com/2uFUkJ
-
-Chhupa Rustam (2001_Film) | Full HD movie | Sanjay Kapoor | Manisha Koirala | Mamta Kulkarni |. (2:7:59 min). Chhupa Rustam (छुपा रुस्तम) Super hit. Chhupa Rustam - watch online for free.
-Chhupa Rustam - watch online for free.
-Watch movies online for free in good quality.
-Chhupa Rustam - watch online.
-Chhupa Rustam -.
-Chhupa Rustam - watch online for free. 8a78ff9644
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.md b/spaces/diacanFperku/AutoGPT/Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.md
deleted file mode 100644
index a16e44e7c1d9515169e331136438fad11d30113b..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator: A Powerful Tool for Auto Repair Professionals
-If you are looking for a reliable and comprehensive source of repair information for cars and trucks, you should consider Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator. This software is designed to help you diagnose, repair and maintain vehicles with ease and accuracy.
-In this article, we will review the features, benefits and installation of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator and show you how it can improve your auto service business.
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator
DOWNLOAD 🔗 https://gohhs.com/2uFTaj
-What is Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with access to a vast database of repair information for vehicles from the American and imported markets.
-The software contains detailed descriptions of the technology, service and maintenance procedures, diagnostic codes and troubleshooting tips, wiring diagrams, parts and labor estimates, and more.
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is one of the best programs for auto repair and is an indispensable tool for auto service professionals.
-What are the features of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator has many features that make it a powerful and user-friendly tool for auto repair professionals.
-Some of the features include:
-
-
-- A simple and intuitive interface that allows you to easily navigate through the database and find the information you need.
-- A comprehensive coverage of vehicles from various manufacturers, models and years, including cars, light trucks, vans and SUVs.
-- A diagnosis and repair section that provides you with step-by-step instructions, illustrations, specifications and tips for fixing any problem with your vehicle.
-- A parts and labor section that gives you original part numbers, illustrations, prices and labor times for any repair job.
-- A wiring diagram section that shows you the electrical components and circuits of your vehicle in color-coded diagrams.
-- An estimator section that helps you calculate the cost of any repair job based on your location, labor rates and parts prices.
-
-What are the benefits of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator can help you improve your auto service business in many ways.
-Some of the benefits include:
-
-- Increasing your productivity and efficiency by providing you with accurate and up-to-date information on any vehicle.
-- Reducing your costs and errors by giving you precise parts and labor estimates for any repair job.
-- Enhancing your customer satisfaction and loyalty by delivering high-quality service and repairs in a timely manner.
-- Gaining a competitive edge over other auto service providers by using a trusted and reputable source of repair information.
-
-How to install Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-To install Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator on your computer, you will need to follow these steps:
-
-- Download the software from a reliable source or use a DVD disc.
-- Extract the files from the compressed folder or insert the DVD disc into your drive.
-- Run the setup.exe file and follow the instructions on the screen.
-- Enter the activation code when prompted.
-- Enjoy using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator on your computer.
-
-Conclusion
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with a comprehensive source of repair information for vehicles from the American and imported markets.
-The software has many features and benefits that can help you improve your auto service business by increasing your productivity, efficiency, accuracy, quality and customer satisfaction.
-If you are looking for a reliable and comprehensive source of repair information for cars and trucks, you should consider Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator as your choice.
-How to use Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-Using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is easy and convenient. You can access the software from your computer or from a mobile device via the internet.
-To use the software, you need to follow these steps:
-
-- Select the vehicle make, model and year from the drop-down menus or enter the VIN number.
-- Choose the system or component you want to work on from the menu or use the search function.
-- View the information you need from the diagnosis and repair, parts and labor, wiring diagram or estimator sections.
-- Print or save the information as needed.
-
-You can also customize the software settings to suit your preferences and needs, such as language, units, currency, labor rates and more.
-What are the system requirements for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-To run Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator on your computer, you need to have the following minimum system requirements:
-
-- Microsoft® Windows XP, Windows 2000*, Windows NT 4.0*, Windows ME, Windows 98*
-- 233 MHz Intel® Business class Computer
-- 64 Megabytes (MB) Random Access Memory (RAM)
-- 15" Super VGA color monitor 800x600 resolution
-
-You also need to have an internet connection to access the online version of the software or to update your offline version.
-Where can I get Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-You can get Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator from various sources online or offline.
-Some of the sources include:
-
-- The official website of Mitchell 1, where you can purchase a subscription or a DVD disc of the software.
-- The online platforms of RuTracker.org or MHH AUTO, where you can download the software for free or for a fee.
-- The audio platforms of SoundCloud, where you can listen to excerpts of the software or purchase a full version.
-
-However, you should be careful when choosing a source for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator, as some sources may not be reliable, secure or legal.
-You should always check the reputation, reviews and ratings of the source before downloading or purchasing the software.
-What are the alternatives to Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is not the only repair software available on the market, but it has some alternatives that you may want to consider.
-Some of the alternatives include:
-
-- Alldata Repair - a repair software that provides OEM information for vehicles from 1982 to present.
-- Autodata - a repair software that provides technical information for vehicles from Europe, Asia and the US.
-- Haynes Pro - a repair software that provides workshop manuals, wiring diagrams and technical data for vehicles from various manufacturers.
-
-Each of these alternatives has its own advantages and disadvantages, and you should compare them with Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator before making a decision.
-How to update Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-To keep your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator up to date with the latest information and data, you need to update it regularly.
-To update your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator, you need to follow these steps:
-
-- Connect your computer or mobile device to the internet.
-- Launch your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator software.
-- Click on the update button or menu option.
-- Follow the instructions on the screen to download and install the latest updates.
-- Restart your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator software.
-
-You can also check the official website of Mitchell 1 for any news or announcements about new updates or versions of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.
-How to uninstall Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-If you want to uninstall Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator from your computer or mobile device, you need to follow these steps:
-
-- Close your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator software.
-- Go to your control panel or settings menu.
-- Select the add or remove programs or applications option.
-- Find and select Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator from the list of programs or applications.
-- Click on the uninstall or remove button.
-- Follow the instructions on the screen to complete the uninstallation process.
-
-You can also delete any files or folders related to Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator from your computer or mobile device.
-What are the FAQs about Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with a comprehensive source of repair information for vehicles from the American and imported markets.
-Here are some of the frequently asked questions (FAQs) about Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator:
-
-- Q: How much does Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator cost?
-- A: The cost of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator depends on the type and duration of your subscription or purchase. You can check the official website of Mitchell 1 for the latest pricing and offers.
-- Q: How can I get a free trial of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-- A: You can get a free trial of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator by registering on the official website of Mitchell 1 and requesting a demo.
-- Q: How can I get technical support for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-- A: You can get technical support for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator by contacting the customer service team of Mitchell 1 via phone, email or chat.
-- Q: How can I get feedback or suggestions for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-- A: You can get feedback or suggestions for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator by joining the online community of Mitchell 1 and sharing your opinions and experiences with other users.
-
-What are the tips and tricks for using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with a comprehensive source of repair information for vehicles from the American and imported markets.
-Here are some of the tips and tricks for using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator:
-
-- TIP: You can use the search function to find the information you need quickly and easily.
-- TIP: You can use the bookmarks and favorites functions to save and access the information you use frequently.
-- TIP: You can use the print and save functions to create and store copies of the information you need.
-- TIP: You can use the zoom and pan functions to adjust and view the graphics and illustrations better.
-- TIP: You can use the help function to access the user guide and tutorials for using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.
-
-Conclusion
-Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with a comprehensive source of repair information for vehicles from the American and imported markets.
-The software has many features and benefits that can help you improve your auto service business by increasing your productivity, efficiency, accuracy, quality and customer satisfaction.
-The software is easy to install, use and update, and has various sources of support, feedback and suggestions.
-The software also has some alternatives and competitors that you may want to compare and evaluate before making a decision.
-If you are looking for a reliable and comprehensive source of repair information for cars and trucks, you should consider Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator as your choice. 3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/PATCHED Adobe Premiere Pro CC 2018 V13.1.1.15 Patch.md b/spaces/diacanFperku/AutoGPT/PATCHED Adobe Premiere Pro CC 2018 V13.1.1.15 Patch.md
deleted file mode 100644
index 8afd2e220b9203aadad4d11952f2a08a598f69e0..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/PATCHED Adobe Premiere Pro CC 2018 V13.1.1.15 Patch.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-walpZoffoopyiptyday [url=[[[[ ReFWocheNuththegodat [url= walpZoffoopyiptyday [url= My Romance] [url= Adobe Premiere Pro CC 2018 v13.1.1.15 Patch [url=»] [url= www.Gaebywew.org/puk-ladhi-harit-599.html»] [url= semarang ultima ultras insanse 2012 320p [url= ReFWocheNuththegodat [url= [url= Taiseertaids [url= melsAtterve [url= Jaipur Auto Expo 2017 [url= walpZoffoopyiptyday [url= Losing My Religion (wame song) Wame [url= Tesulu Spotsong [url= walpZoffoopyiptyday [url= wenga kanna kokoro [url= 2nd film Watch Super heroes torrent [url= melsAtterve [url= walpZoffoopyiptyday [url= pdffree 1.jpg [url= briletypeAbumunult [url= ReFWocheNuththegodat [url= [url= adobe-reader for blackberry android [url= briletypeAbumunlt [url= harris pc 29 mp3 2013 godat[/url] briletypeAbumunult [url= briletypeAbumunult [url= thatmatrixmat finder site dluminant godathe [url= ad, improved_photo (17) iMGSRC.RU [url= Themes Taiseertaids [url= Taiseertaids PACKED Adobe Premiere Pro CC 2018 v13.1.1.15 Patch wallpZoffoopyi. [url= lagu Rangda, Angka, Link 2. [url= coc 180 (Torrent) TPX 02 [url= V10HD ESPAOL, Link 2 [url= sesspaphpag [url= Taiseertaids [url= tomas milian squadra antimafia dvx torrent ita [url] ita[/url] [url= AttachmentsTakeLonger-to-arrive [url= 2 Fresh Crack With Torrent Free Download for [Win [url= Taiseertaids [url= En vivoDallas Mavericks vs Chicago Bulls Dallas Mavericks vs Chicago Bulls en lnea Link 2 [url= sesspaphpag [url= en lnea[/url]Description Of Nature 6th Ed.pdf[/url]EquantyroarkPata [url= money[/url] If YouForwardedMailTakeLongerToArrive [url= Taiseertaids [url= Flissinneple [url= does-forwarded-mail-take-longer-to-arrive [url= 2 Fresh Crack With Torrent Free Download for [Win [url= Taiseertaids [url= Freedom[/url]PPC[/url]2 Fresh Crack With Torrent Free Torrent For [Win [url= Top [/url]S. Tacos Los & Shar [url]Guarda Union La Calera v [Win. Dan [url] in lnea[/url] walpZoffoopyi.pallati.
-PATCHED Adobe Premiere Pro CC 2018 v13.1.1.15 Patch
-Download ✪✪✪ https://gohhs.com/2uFTKN
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Race Gurram Telugu Movie REPACK Download 720p Hd.md b/spaces/diacanFperku/AutoGPT/Race Gurram Telugu Movie REPACK Download 720p Hd.md
deleted file mode 100644
index 479c42efddcd4c24f72da1dfe754b4936f2f8ccc..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Race Gurram Telugu Movie REPACK Download 720p Hd.md
+++ /dev/null
@@ -1,8 +0,0 @@
-Race Gurram Telugu Movie Download 720p Hd
-Download Zip ✪ https://gohhs.com/2uFV55
-
-Race Gurram "Gala Gala" video song promo in 720p HD.
-Provided to YouTube by Universal Music Group: Race Gurram "Gala Gala" video song promo, 2017.
-
-
-
diff --git a/spaces/digitalxingtong/Un-Bert-Vits2/text/japanese.py b/spaces/digitalxingtong/Un-Bert-Vits2/text/japanese.py
deleted file mode 100644
index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Un-Bert-Vits2/text/japanese.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py
-import re
-import sys
-
-import pyopenjtalk
-
-from text import symbols
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-
-def post_replace_ph(ph):
- rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- 'v': "V"
- }
- if ph in rep_map.keys():
- ph = rep_map[ph]
- if ph in symbols:
- return ph
- if ph not in symbols:
- ph = 'UNK'
- return ph
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def preprocess_jap(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = []
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- p = pyopenjtalk.g2p(sentence)
- text += p.split(" ")
-
- if i < len(marks):
- text += [marks[i].replace(' ', '')]
- return text
-
-def text_normalize(text):
- # todo: jap text normalize
- return text
-
-def g2p(norm_text):
- phones = preprocess_jap(norm_text)
- phones = [post_replace_ph(i) for i in phones]
- # todo: implement tones and word2ph
- tones = [0 for i in phones]
- word2ph = [1 for i in phones]
- return phones, tones, word2ph
-
-
-if __name__ == '__main__':
- for line in open("../../../Downloads/transcript_utf8.txt").readlines():
- text = line.split(":")[1]
- phones, tones, word2ph = g2p(text)
- for p in phones:
- if p == "z":
- print(text, phones)
- sys.exit(0)
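For quick reference, here is a minimal usage sketch of the g2p pipeline defined in japanese.py above. It is an editorial illustration rather than part of the deleted file: it assumes pyopenjtalk is installed, that the module is importable as text.japanese inside its original package (so that the "from text import symbols" import resolves), and the sample sentence is purely hypothetical.

from text.japanese import g2p, text_normalize

sample = "こんにちは、世界。"          # hypothetical input sentence
norm = text_normalize(sample)          # currently a pass-through normalizer
phones, tones, word2ph = g2p(norm)     # phoneme list plus placeholder tone/word2ph lists
# post_replace_ph maps punctuation into the symbol set, e.g. '、' -> ',' and '。' -> '.'
print(phones)
# tones and word2ph are placeholders here: one zero tone and one word2ph entry per phoneme
assert len(phones) == len(tones) == len(word2ph)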
diff --git a/spaces/dineshreddy/WALT/mmcv_custom/__init__.py b/spaces/dineshreddy/WALT/mmcv_custom/__init__.py
deleted file mode 100644
index 7e0e39b03e2a149c33c372472b2b814a872ec55c..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmcv_custom/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from .checkpoint import load_checkpoint
-
-__all__ = ['load_checkpoint']
diff --git a/spaces/dolceschokolade/chatbot-mini/next-i18next.config.js b/spaces/dolceschokolade/chatbot-mini/next-i18next.config.js
deleted file mode 100644
index a478a6390ff9716b607da65fc20199228917cdaa..0000000000000000000000000000000000000000
--- a/spaces/dolceschokolade/chatbot-mini/next-i18next.config.js
+++ /dev/null
@@ -1,33 +0,0 @@
-module.exports = {
- i18n: {
- defaultLocale: 'en',
- locales: [
- "bn",
- "de",
- "en",
- "es",
- "fr",
- "he",
- "id",
- "it",
- "ja",
- "ko",
- "pl",
- "pt",
- "ru",
- "ro",
- "sv",
- "te",
- "vi",
- "zh",
- "ar",
- "tr",
- "ca",
- "fi",
- ],
- },
- localePath:
- typeof window === 'undefined'
- ? require('path').resolve('./public/locales')
- : '/public/locales',
-};
diff --git a/spaces/dongsiqie/Image-to-Line-Drawings/app.py b/spaces/dongsiqie/Image-to-Line-Drawings/app.py
deleted file mode 100644
index 5d1c9e8d9ae50a0d180e025ce2f8d5542dbbcd82..0000000000000000000000000000000000000000
--- a/spaces/dongsiqie/Image-to-Line-Drawings/app.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-import gradio as gr
-from PIL import Image
-import torchvision.transforms as transforms
-
-norm_layer = nn.InstanceNorm2d
-
-class ResidualBlock(nn.Module):
- def __init__(self, in_features):
- super(ResidualBlock, self).__init__()
-
- conv_block = [ nn.ReflectionPad2d(1),
- nn.Conv2d(in_features, in_features, 3),
- norm_layer(in_features),
- nn.ReLU(inplace=True),
- nn.ReflectionPad2d(1),
- nn.Conv2d(in_features, in_features, 3),
- norm_layer(in_features)
- ]
-
- self.conv_block = nn.Sequential(*conv_block)
-
- def forward(self, x):
- return x + self.conv_block(x)
-
-
-class Generator(nn.Module):
- def __init__(self, input_nc, output_nc, n_residual_blocks=9, sigmoid=True):
- super(Generator, self).__init__()
-
- # Initial convolution block
- model0 = [ nn.ReflectionPad2d(3),
- nn.Conv2d(input_nc, 64, 7),
- norm_layer(64),
- nn.ReLU(inplace=True) ]
- self.model0 = nn.Sequential(*model0)
-
- # Downsampling
- model1 = []
- in_features = 64
- out_features = in_features*2
- for _ in range(2):
- model1 += [ nn.Conv2d(in_features, out_features, 3, stride=2, padding=1),
- norm_layer(out_features),
- nn.ReLU(inplace=True) ]
- in_features = out_features
- out_features = in_features*2
- self.model1 = nn.Sequential(*model1)
-
- model2 = []
- # Residual blocks
- for _ in range(n_residual_blocks):
- model2 += [ResidualBlock(in_features)]
- self.model2 = nn.Sequential(*model2)
-
- # Upsampling
- model3 = []
- out_features = in_features//2
- for _ in range(2):
- model3 += [ nn.ConvTranspose2d(in_features, out_features, 3, stride=2, padding=1, output_padding=1),
- norm_layer(out_features),
- nn.ReLU(inplace=True) ]
- in_features = out_features
- out_features = in_features//2
- self.model3 = nn.Sequential(*model3)
-
- # Output layer
- model4 = [ nn.ReflectionPad2d(3),
- nn.Conv2d(64, output_nc, 7)]
- if sigmoid:
- model4 += [nn.Sigmoid()]
-
- self.model4 = nn.Sequential(*model4)
-
- def forward(self, x, cond=None):
- out = self.model0(x)
- out = self.model1(out)
- out = self.model2(out)
- out = self.model3(out)
- out = self.model4(out)
-
- return out
-
-model1 = Generator(3, 1, 3)
-model1.load_state_dict(torch.load('model.pth', map_location=torch.device('cpu')))
-model1.eval()
-
-model2 = Generator(3, 1, 3)
-model2.load_state_dict(torch.load('model2.pth', map_location=torch.device('cpu')))
-model2.eval()
-
-def predict(input_img, ver):
- input_img = Image.open(input_img)
- transform = transforms.Compose([transforms.Resize(256, Image.BICUBIC), transforms.ToTensor()])
- input_img = transform(input_img)
- input_img = torch.unsqueeze(input_img, 0)
-
- drawing = 0
- with torch.no_grad():
- if ver == 'Simple Lines':
- drawing = model2(input_img)[0].detach()
- else:
- drawing = model1(input_img)[0].detach()
-
- drawing = transforms.ToPILImage()(drawing)
- return drawing
-
-title="Image to Line Drawings - Complex and Simple Portraits and Landscapes"
-examples=[
-['01.jpeg', 'Simple Lines'], ['02.jpeg', 'Simple Lines'], ['03.jpeg', 'Simple Lines'],
-['07.jpeg', 'Complex Lines'], ['08.jpeg', 'Complex Lines'], ['09.jpeg', 'Complex Lines'],
-['10.jpeg', 'Simple Lines'], ['11.jpeg', 'Simple Lines'], ['12.jpeg', 'Simple Lines'],
-['01.jpeg', 'Complex Lines'], ['02.jpeg', 'Complex Lines'], ['03.jpeg', 'Complex Lines'],
-['04.jpeg', 'Simple Lines'], ['05.jpeg', 'Simple Lines'], ['06.jpeg', 'Simple Lines'],
-['07.jpeg', 'Simple Lines'], ['08.jpeg', 'Simple Lines'], ['09.jpeg', 'Simple Lines'],
-['04.jpeg', 'Complex Lines'], ['05.jpeg', 'Complex Lines'], ['06.jpeg', 'Complex Lines'],
-['10.jpeg', 'Complex Lines'], ['11.jpeg', 'Complex Lines'], ['12.jpeg', 'Complex Lines'],
-['Upload Wild Horses 2.jpeg', 'Complex Lines']
-]
-
-iface = gr.Interface(predict, [gr.inputs.Image(type='filepath'),
- gr.inputs.Radio(['Complex Lines','Simple Lines'], type="value", default='Simple Lines', label='version')],
- gr.outputs.Image(type="pil"), title=title,examples=examples)
-
-iface.launch()
\ No newline at end of file
diff --git a/spaces/dongsiqie/pandora/Dockerfile b/spaces/dongsiqie/pandora/Dockerfile
deleted file mode 100644
index 184b990027a341908949391c781b73b2ac333d21..0000000000000000000000000000000000000000
--- a/spaces/dongsiqie/pandora/Dockerfile
+++ /dev/null
@@ -1,8 +0,0 @@
-FROM python:3.11
-RUN apt update
-RUN apt install -y git
-RUN git clone https://github.com/yangjianchuan/pandora-cloud-serverless.git
-WORKDIR "pandora-cloud-serverless"
-RUN pip install -r requirements.txt
-EXPOSE 8018
-CMD ["python", "main.py"]
\ No newline at end of file
diff --git a/spaces/dongyi/MMFS/models/modules/vit/extractor.py b/spaces/dongyi/MMFS/models/modules/vit/extractor.py
deleted file mode 100644
index 060cd90b2ae6eaf2a9272fa496790e6fce32d2a3..0000000000000000000000000000000000000000
--- a/spaces/dongyi/MMFS/models/modules/vit/extractor.py
+++ /dev/null
@@ -1,165 +0,0 @@
-# code from https://github.com/omerbt/Splice/blob/master/models/extractor.py
-
-import torch
-
-
-def attn_cosine_sim(x, eps=1e-08):
- x = x[0] # TEMP: getting rid of redundant dimension, TBF
- norm1 = x.norm(dim=2, keepdim=True)
- factor = torch.clamp(norm1 @ norm1.permute(0, 2, 1), min=eps)
- sim_matrix = (x @ x.permute(0, 2, 1)) / factor
- return sim_matrix
-
-
-class VitExtractor:
- BLOCK_KEY = 'block'
- ATTN_KEY = 'attn'
- PATCH_IMD_KEY = 'patch_imd'
- QKV_KEY = 'qkv'
- KEY_LIST = [BLOCK_KEY, ATTN_KEY, PATCH_IMD_KEY, QKV_KEY]
-
- def __init__(self, model_name, device):
- self.model = torch.hub.load('facebookresearch/dino:main', model_name).to(device)
- self.model.eval()
- self.model_name = model_name
- self.hook_handlers = []
- self.layers_dict = {}
- self.outputs_dict = {}
- for key in VitExtractor.KEY_LIST:
- self.layers_dict[key] = []
- self.outputs_dict[key] = []
- self._init_hooks_data()
-
- def _init_hooks_data(self):
- self.layers_dict[VitExtractor.BLOCK_KEY] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
- self.layers_dict[VitExtractor.ATTN_KEY] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
- self.layers_dict[VitExtractor.QKV_KEY] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
- self.layers_dict[VitExtractor.PATCH_IMD_KEY] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
- for key in VitExtractor.KEY_LIST:
- # self.layers_dict[key] = kwargs[key] if key in kwargs.keys() else []
- self.outputs_dict[key] = []
-
- def _register_hooks(self, **kwargs):
- for block_idx, block in enumerate(self.model.blocks):
- if block_idx in self.layers_dict[VitExtractor.BLOCK_KEY]:
- self.hook_handlers.append(block.register_forward_hook(self._get_block_hook()))
- if block_idx in self.layers_dict[VitExtractor.ATTN_KEY]:
- self.hook_handlers.append(block.attn.attn_drop.register_forward_hook(self._get_attn_hook()))
- if block_idx in self.layers_dict[VitExtractor.QKV_KEY]:
- self.hook_handlers.append(block.attn.qkv.register_forward_hook(self._get_qkv_hook()))
- if block_idx in self.layers_dict[VitExtractor.PATCH_IMD_KEY]:
- self.hook_handlers.append(block.attn.register_forward_hook(self._get_patch_imd_hook()))
-
- def _clear_hooks(self):
- for handler in self.hook_handlers:
- handler.remove()
- self.hook_handlers = []
-
- def _get_block_hook(self):
- def _get_block_output(model, input, output):
- self.outputs_dict[VitExtractor.BLOCK_KEY].append(output)
-
- return _get_block_output
-
- def _get_attn_hook(self):
- def _get_attn_output(model, inp, output):
- self.outputs_dict[VitExtractor.ATTN_KEY].append(output)
-
- return _get_attn_output
-
- def _get_qkv_hook(self):
- def _get_qkv_output(model, inp, output):
- self.outputs_dict[VitExtractor.QKV_KEY].append(output)
-
- return _get_qkv_output
-
- # TODO: CHECK ATTN OUTPUT TUPLE
- def _get_patch_imd_hook(self):
- def _get_attn_output(model, inp, output):
- self.outputs_dict[VitExtractor.PATCH_IMD_KEY].append(output[0])
-
- return _get_attn_output
-
- def get_feature_from_input(self, input_img): # List([B, N, D])
- self._register_hooks()
- self.model(input_img)
- feature = self.outputs_dict[VitExtractor.BLOCK_KEY]
- self._clear_hooks()
- self._init_hooks_data()
- return feature
-
- def get_qkv_feature_from_input(self, input_img):
- self._register_hooks()
- self.model(input_img)
- feature = self.outputs_dict[VitExtractor.QKV_KEY]
- self._clear_hooks()
- self._init_hooks_data()
- return feature
-
- def get_attn_feature_from_input(self, input_img):
- self._register_hooks()
- self.model(input_img)
- feature = self.outputs_dict[VitExtractor.ATTN_KEY]
- self._clear_hooks()
- self._init_hooks_data()
- return feature
-
- def get_patch_size(self):
- return 8 if "8" in self.model_name else 16
-
- def get_width_patch_num(self, input_img_shape):
- b, c, h, w = input_img_shape
- patch_size = self.get_patch_size()
- return w // patch_size
-
- def get_height_patch_num(self, input_img_shape):
- b, c, h, w = input_img_shape
- patch_size = self.get_patch_size()
- return h // patch_size
-
- def get_patch_num(self, input_img_shape):
- patch_num = 1 + (self.get_height_patch_num(input_img_shape) * self.get_width_patch_num(input_img_shape))
- return patch_num
-
- def get_head_num(self):
- if "dino" in self.model_name:
- return 6 if "s" in self.model_name else 12
- return 6 if "small" in self.model_name else 12
-
- def get_embedding_dim(self):
- if "dino" in self.model_name:
- return 384 if "s" in self.model_name else 768
- return 384 if "small" in self.model_name else 768
-
- def get_queries_from_qkv(self, qkv, input_img_shape):
- patch_num = self.get_patch_num(input_img_shape)
- head_num = self.get_head_num()
- embedding_dim = self.get_embedding_dim()
- q = qkv.reshape(patch_num, 3, head_num, embedding_dim // head_num).permute(1, 2, 0, 3)[0]
- return q
-
- def get_keys_from_qkv(self, qkv, input_img_shape):
- patch_num = self.get_patch_num(input_img_shape)
- head_num = self.get_head_num()
- embedding_dim = self.get_embedding_dim()
- k = qkv.reshape(patch_num, 3, head_num, embedding_dim // head_num).permute(1, 2, 0, 3)[1]
- return k
-
- def get_values_from_qkv(self, qkv, input_img_shape):
- patch_num = self.get_patch_num(input_img_shape)
- head_num = self.get_head_num()
- embedding_dim = self.get_embedding_dim()
- v = qkv.reshape(patch_num, 3, head_num, embedding_dim // head_num).permute(1, 2, 0, 3)[2]
- return v
-
- def get_keys_from_input(self, input_img, layer_num):
- qkv_features = self.get_qkv_feature_from_input(input_img)[layer_num]
- keys = self.get_keys_from_qkv(qkv_features, input_img.shape)
- return keys
-
- def get_keys_self_sim_from_input(self, input_img, layer_num):
- keys = self.get_keys_from_input(input_img, layer_num=layer_num)
- h, t, d = keys.shape
- concatenated_keys = keys.transpose(0, 1).reshape(t, h * d)
- ssim_map = attn_cosine_sim(concatenated_keys[None, None, ...])
- return ssim_map
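For context, the short sketch below shows how the VitExtractor above is typically driven. It is an editorial illustration, not part of the deleted file: it assumes torch is installed, that torch.hub can fetch the DINO weights, and that this module is importable as extractor; 'dino_vits8' is just one valid DINO model name, and the input tensor is a dummy stand-in for a preprocessed image.

import torch
from extractor import VitExtractor

device = 'cpu'
vit_ext = VitExtractor(model_name='dino_vits8', device=device)  # DINO ViT-S/8 via torch.hub

img = torch.rand(1, 3, 224, 224).to(device)  # dummy, already-resized image batch
with torch.no_grad():
    # keys of the chosen transformer block: [heads, tokens, dim_per_head]
    keys = vit_ext.get_keys_from_input(img, layer_num=11)
    # token-to-token cosine self-similarity of those keys: [1, tokens, tokens]
    ssim = vit_ext.get_keys_self_sim_from_input(img, layer_num=11)
print(keys.shape, ssim.shape)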
diff --git a/spaces/ds520/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/ds520/bingo/src/lib/hooks/use-at-bottom.tsx
deleted file mode 100644
index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000
--- a/spaces/ds520/bingo/src/lib/hooks/use-at-bottom.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import * as React from 'react'
-
-export function useAtBottom(offset = 0) {
- const [isAtBottom, setIsAtBottom] = React.useState(false)
-
- React.useEffect(() => {
- const handleScroll = () => {
- setIsAtBottom(
- window.innerHeight + window.scrollY >=
- document.body.offsetHeight - offset
- )
- }
-
- window.addEventListener('scroll', handleScroll, { passive: true })
- handleScroll()
-
- return () => {
- window.removeEventListener('scroll', handleScroll)
- }
- }, [offset])
-
- return isAtBottom
-}
diff --git a/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/clova_impl/resnet.py b/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/clova_impl/resnet.py
deleted file mode 100644
index e32faa3bef418369ada99770c3bbc938fb1c0d8a..0000000000000000000000000000000000000000
--- a/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/clova_impl/resnet.py
+++ /dev/null
@@ -1,262 +0,0 @@
-from typing import Dict
-from collections import OrderedDict
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from ..addon_module.visual_attention import GlobalContext
-from .....helper import clean_state_dict
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = self._conv3x3(inplanes, planes)
- self.bn1 = nn.BatchNorm2d(planes)
- self.conv2 = self._conv3x3(planes, planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def _conv3x3(self, in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
- def zero_init_last_bn(self):
- nn.init.zeros_(self.bn2.weight)
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-class ResNet(nn.Module):
- def __init__(self, input_channel, output_channel, block, layers, with_gcb=True, debug=False, zero_init_last_bn=False):
- super(ResNet, self).__init__()
- self.with_gcb = with_gcb
-
- self.output_channel_block = [int(output_channel / 4), int(output_channel / 2), output_channel, output_channel]
- self.inplanes = int(output_channel / 8)
-
- self.conv0_1 = nn.Conv2d(input_channel, int(output_channel / 16),
- kernel_size=3, stride=1, padding=1, bias=False)
- self.bn0_1 = nn.BatchNorm2d(int(output_channel / 16))
-
- self.conv0_2 = nn.Conv2d(int(output_channel / 16), self.inplanes,
- kernel_size=3, stride=1, padding=1, bias=False)
- self.bn0_2 = nn.BatchNorm2d(self.inplanes)
- self.relu = nn.ReLU(inplace=True)
-
- self.maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
- self.layer1 = self._make_layer(block, self.output_channel_block[0], layers[0])
- self.conv1 = nn.Conv2d(self.output_channel_block[0], self.output_channel_block[
- 0], kernel_size=3, stride=1, padding=1, bias=False)
- self.bn1 = nn.BatchNorm2d(self.output_channel_block[0])
-
- self.maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
- self.layer2 = self._make_layer(block, self.output_channel_block[1], layers[1], stride=1)
- self.conv2 = nn.Conv2d(self.output_channel_block[1], self.output_channel_block[
- 1], kernel_size=3, stride=1, padding=1, bias=False)
- self.bn2 = nn.BatchNorm2d(self.output_channel_block[1])
-
- self.maxpool3 = nn.MaxPool2d(kernel_size=2, stride=(2, 1), padding=(0, 1))
- self.layer3 = self._make_layer(block, self.output_channel_block[2], layers[2], stride=1)
- self.conv3 = nn.Conv2d(self.output_channel_block[2], self.output_channel_block[
- 2], kernel_size=3, stride=1, padding=1, bias=False)
- self.bn3 = nn.BatchNorm2d(self.output_channel_block[2])
-
- self.layer4 = self._make_layer(block, self.output_channel_block[3], layers[3], stride=1)
-
- self.conv4_1 = nn.Conv2d(self.output_channel_block[3], self.output_channel_block[
- 3], kernel_size=2, stride=(2, 1), padding=(0, 1), bias=False)
- self.bn4_1 = nn.BatchNorm2d(self.output_channel_block[3])
-
- self.conv4_2 = nn.Conv2d(self.output_channel_block[3], self.output_channel_block[
- 3], kernel_size=2, stride=1, padding=0, bias=False)
- self.bn4_2 = nn.BatchNorm2d(self.output_channel_block[3])
-
- self.init_weights(zero_init_last_bn=zero_init_last_bn)
- self.debug = debug
-
- def zero_init_last_bn(self):
- nn.init.zeros_(self.bn4_2.weight)
-
- def init_weights(self, zero_init_last_bn=True):
- initialized = ['global_cxt', 'bottleneck_add', 'bottleneck_mul']
- for n, m in self.named_modules():
- if any([d in n for d in initialized]):
- continue
- elif isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.ones_(m.weight)
- nn.init.zeros_(m.bias)
- if zero_init_last_bn:
- for m in self.modules():
- if hasattr(m, 'zero_init_last_bn'):
- m.zero_init_last_bn()
-
- def _make_layer(self, block, planes, blocks, with_gcb=False, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
-
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes))
-
- if self.with_gcb:
- layers.append(GlobalContext(planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- if self.debug:
- print('input shape', x.shape)
-
- x = self.conv0_1(x)
- x = self.bn0_1(x)
- x = self.relu(x)
-
- if self.debug:
- print('conv1 shape', x.shape)
-
- x = self.conv0_2(x)
- x = self.bn0_2(x)
- x = self.relu(x)
-
- if self.debug:
- print('conv2 shape', x.shape)
-
- x = self.maxpool1(x)
-
- if self.debug:
- print('pool1 shape', x.shape)
-
- x = self.layer1(x)
-
- if self.debug:
- print('block1 shape', x.shape)
-
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
-
- if self.debug:
- print('conv3 shape', x.shape)
-
- x = self.maxpool2(x)
-
- if self.debug:
- print('pool2 shape', x.shape)
-
- x = self.layer2(x)
-
- if self.debug:
- print('block2 shape', x.shape)
-
- x = self.conv2(x)
- x = self.bn2(x)
- x = self.relu(x)
-
- if self.debug:
- print('conv4 shape', x.shape)
-
- x = self.maxpool3(x)
-
- if self.debug:
- print('pool3 shape', x.shape)
-
- x = self.layer3(x)
-
- if self.debug:
- print('block3 shape', x.shape)
-
- x = self.conv3(x)
- x = self.bn3(x)
- x = self.relu(x)
-
- if self.debug:
- print('conv5 shape', x.shape)
-
- x = self.layer4(x)
-
- if self.debug:
- print('block4 shape', x.shape)
-
- x = self.conv4_1(x)
- x = self.bn4_1(x)
- x = self.relu(x)
-
- if self.debug:
- print('conv6 shape', x.shape)
-
- x = self.conv4_2(x)
- x = self.bn4_2(x)
- x = self.relu(x)
-
- if self.debug:
- print('conv7 shape', x.shape)
-
- return x
-
-class ResNet_FeatureExtractor(nn.Module):
- """ FeatureExtractor of FAN (http://openaccess.thecvf.com/content_ICCV_2017/papers/Cheng_Focusing_Attention_Towards_ICCV_2017_paper.pdf) """
-
- def __init__(self, input_channel=3, output_channel=512, gcb=False, pretrained=False, weight_dir=None, debug=False):
- super(ResNet_FeatureExtractor, self).__init__()
- self.ConvNet = ResNet(input_channel, output_channel, BasicBlock, [1, 2, 5, 3], gcb, debug)
- self.in_chans = input_channel
- if pretrained:
- assert weight_dir is not None
- self.load_pretrained(weight_dir)
-
- def forward(self, input):
- output = self.ConvNet(input)
- return output
-
- def load_pretrained(self, weight_dir):
- state_dict: OrderedDict = torch.load(weight_dir)
- cleaned_state_dict = clean_state_dict(state_dict)
- new_state_dict = OrderedDict()
- name: str
- param: torch.FloatTensor
- for name, param in cleaned_state_dict.items():
- if name.startswith('FeatureExtraction'):
- output_name = name.replace('FeatureExtraction.', '')
- if output_name == 'ConvNet.conv0_1.weight':
- print('Old', param.shape)
- new_param = param.repeat(1, self.in_chans, 1, 1)
- print('New', new_param.shape)
- else: new_param = param
- new_state_dict[output_name] = new_param
- print("=> Loading pretrained weight for ResNet backbone")
- self.load_state_dict(new_state_dict)
-
-if __name__ == '__main__':
- model = ResNet_FeatureExtractor(input_channel=1, debug=True)
- a = torch.rand(1, 1, 128, 480)
- output = model(a)
- print(output.shape)
\ No newline at end of file
diff --git a/spaces/elvis-d/tweet-sentiment-analysis.GRADIO/README.md b/spaces/elvis-d/tweet-sentiment-analysis.GRADIO/README.md
deleted file mode 100644
index 2d30a72544dc2bfa960e1bde721c52aae5cf21a9..0000000000000000000000000000000000000000
--- a/spaces/elvis-d/tweet-sentiment-analysis.GRADIO/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Tweet Sentiment Analysis.GRADIO
-emoji: 📊
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.37.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/emc348/faces-through-time/criteria/__init__.py b/spaces/emc348/faces-through-time/criteria/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/f2api/gpt-academic/request_llm/bridge_chatgpt.py b/spaces/f2api/gpt-academic/request_llm/bridge_chatgpt.py
deleted file mode 100644
index eef8fbf0b43f30b915f770f4bc54120c84ebd092..0000000000000000000000000000000000000000
--- a/spaces/f2api/gpt-academic/request_llm/bridge_chatgpt.py
+++ /dev/null
@@ -1,285 +0,0 @@
-# Adapted from the https://github.com/GaiZhenbiao/ChuanhuChatGPT project
-
-"""
-    This file mainly contains three functions.
-
-    Function without multi-threading capability:
-    1. predict: used for normal conversation; full interactive features, but not thread-safe
-
-    Functions that can be called from multiple threads:
-    2. predict_no_ui: called by advanced experimental feature modules; output is not shown in the UI in real time; simple parameters; can run in parallel across threads, which makes it easy to implement complex feature logic
-    3. predict_no_ui_long_connection: during experiments we found that when predict_no_ui handles long documents, the connection to openai tends to drop; this function solves that problem by streaming, and it also supports multi-threading
-"""
-
-import json
-import time
-import gradio as gr
-import logging
-import traceback
-import requests
-import importlib
-
-# Put your own secrets, such as the API key and proxy URL, in config_private.py
-# When reading the config, first check whether a private config_private file exists (not tracked by git); if it does, it overrides the original config file
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
-proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
- get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')
-
-timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
- '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
-
-def get_full_error(chunk, stream_response):
- """
-    Get the complete error message returned from OpenAI
- """
- while True:
- try:
- chunk += next(stream_response)
- except:
- break
- return chunk
-
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
- """
-    Send a request to chatGPT and wait for the reply; done in one shot, without showing intermediate output. Internally it still uses streaming to avoid the connection being cut mid-way.
-    inputs:
-        the input of this query
-    sys_prompt:
-        the silent system prompt
-    llm_kwargs:
-        chatGPT's internal tuning parameters
-    history:
-        the list of previous conversation turns
-    observe_window = None:
-        used to pass the already-generated output across threads; most of the time it is only for a fancy visual effect and can be left empty. observe_window[0]: observation window. observe_window[1]: watchdog
- """
-    watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
- headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True)
- retry = 0
- while True:
- try:
- # make a POST request to the API endpoint, stream=False
- from .bridge_all import model_info
- endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
- response = requests.post(endpoint, headers=headers, proxies=proxies,
- json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
- except requests.exceptions.ReadTimeout as e:
- retry += 1
- traceback.print_exc()
- if retry > MAX_RETRY: raise TimeoutError
- if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
-
- stream_response = response.iter_lines()
- result = ''
- while True:
- try: chunk = next(stream_response).decode()
- except StopIteration:
- break
- except requests.exceptions.ConnectionError:
-            chunk = next(stream_response).decode() # the request failed; retry once, and if it fails again there is nothing more we can do
- if len(chunk)==0: continue
- if not chunk.startswith('data:'):
- error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
- if "reduce the length" in error_msg:
- raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
- else:
- raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
-        if ('data: [DONE]' in chunk): break # api2d finished normally
- json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
- delta = json_data["delta"]
- if len(delta) == 0: break
- if "role" in delta: continue
- if "content" in delta:
- result += delta["content"]
- if not console_slience: print(delta["content"], end='')
- if observe_window is not None:
-                # observation window: append the data received so far so it can be displayed
- if len(observe_window) >= 1: observe_window[0] += delta["content"]
-                # watchdog: if it has not been fed before the deadline, terminate
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience:
- raise RuntimeError("用户取消了程序。")
- else: raise RuntimeError("意外Json结构:"+delta)
- if json_data['finish_reason'] == 'length':
- raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
- return result
-
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
-        Send a request to chatGPT and fetch the output as a stream.
-        Used for the basic conversation feature.
-        inputs is the input of this query
-        top_p, temperature are chatGPT's internal tuning parameters
-        history is the list of previous conversation turns (note that if either inputs or history is too long, a token-overflow error will be triggered)
-        chatbot is the conversation list shown in the WebUI; modify it and then yield it to update the chat interface directly
-        additional_fn indicates which button was clicked; see functional.py for the buttons
- """
- if is_any_api_key(inputs):
- chatbot._cookies['api_key'] = inputs
- chatbot.append(("输入已识别为openai的api_key", what_keys(inputs)))
- yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面
- return
- elif not is_any_api_key(chatbot._cookies['api_key']):
- chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。"))
- yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面
- return
-
- if additional_fn is not None:
- import core_functional
-        importlib.reload(core_functional)    # hot-reload the prompt
- core_functional = core_functional.get_core_functions()
-        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # apply the preprocessing function (if any)
- inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
-
- raw_input = inputs
- logging.info(f'[raw_input] {raw_input}')
- chatbot.append((inputs, ""))
- yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
-
- try:
- headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream)
- except RuntimeError as e:
- chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
- yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
- return
-
- history.append(inputs); history.append("")
-
- retry = 0
- while True:
- try:
- # make a POST request to the API endpoint, stream=True
- from .bridge_all import model_info
- endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
- response = requests.post(endpoint, headers=headers, proxies=proxies,
- json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
- except:
- retry += 1
- chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
- retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
- yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
- if retry > MAX_RETRY: raise TimeoutError
-
- gpt_replying_buffer = ""
-
- is_head_of_the_stream = True
- if stream:
- stream_response = response.iter_lines()
- while True:
- try:
- chunk = next(stream_response)
- except StopIteration:
-                # errors like this come from non-official OpenAI endpoints; OpenAI and API2D never reach this branch
- from toolbox import regular_txt_to_markdown; tb_str = '```\n' + trimmed_format_exc() + '```'
- chatbot[-1] = (chatbot[-1][0], f"[Local Message] 远程返回错误: \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk.decode())}")
- yield from update_ui(chatbot=chatbot, history=history, msg="远程返回错误:" + chunk.decode()) # 刷新界面
- return
-
- # print(chunk.decode()[6:])
- if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()):
-                # the first frame of the data stream carries no content
- is_head_of_the_stream = False; continue
-
- if chunk:
- try:
- chunk_decoded = chunk.decode()
-                    # the first condition below is for API2D
- if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0):
-                        # this marks the end of the data stream; gpt_replying_buffer is now complete
- logging.info(f'[response] {gpt_replying_buffer}')
- break
-                    # process the main body of the data stream
- chunkjson = json.loads(chunk_decoded[6:])
- status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}"
-                    # if an exception is raised here, the text is usually too long; see the output of get_full_error for details
- gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk_decoded[6:])['choices'][0]["delta"]["content"]
- history[-1] = gpt_replying_buffer
- chatbot[-1] = (history[-2], history[-1])
- yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
-
- except Exception as e:
- traceback.print_exc()
- yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
- chunk = get_full_error(chunk, stream_response)
- chunk_decoded = chunk.decode()
- error_msg = chunk_decoded
- if "reduce the length" in error_msg:
-                        if len(history) >= 2: history[-1] = ""; history[-2] = "" # clear the overflowing input: history[-2] is this turn's input, history[-1] is this turn's output
- history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
-                                               max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # release at least half of the history
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
-                        # history = [] # clear the history
- elif "does not exist" in error_msg:
- chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
- elif "Incorrect API key" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.")
- elif "exceeded your current quota" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.")
- elif "bad forward key" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
- elif "Not enough point" in error_msg:
- chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
- else:
- from toolbox import regular_txt_to_markdown
- tb_str = '```\n' + trimmed_format_exc() + '```'
- chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
- yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
- return
-
-def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
- """
-    Put all the information together, select the LLM model, and build the HTTP request in preparation for sending it
- """
- if not is_any_api_key(llm_kwargs['api_key']):
- raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。")
-
- api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {api_key}"
- }
-
- conversation_cnt = len(history) // 2
-
- messages = [{"role": "system", "content": system_prompt}]
- if conversation_cnt:
- for index in range(0, 2*conversation_cnt, 2):
- what_i_have_asked = {}
- what_i_have_asked["role"] = "user"
- what_i_have_asked["content"] = history[index]
- what_gpt_answer = {}
- what_gpt_answer["role"] = "assistant"
- what_gpt_answer["content"] = history[index+1]
- if what_i_have_asked["content"] != "":
- if what_gpt_answer["content"] == "": continue
- if what_gpt_answer["content"] == timeout_bot_msg: continue
- messages.append(what_i_have_asked)
- messages.append(what_gpt_answer)
- else:
- messages[-1]['content'] = what_gpt_answer['content']
-
- what_i_ask_now = {}
- what_i_ask_now["role"] = "user"
- what_i_ask_now["content"] = inputs
- messages.append(what_i_ask_now)
-
- payload = {
- "model": llm_kwargs['llm_model'].strip('api2d-'),
- "messages": messages,
- "temperature": llm_kwargs['temperature'], # 1.0,
- "top_p": llm_kwargs['top_p'], # 1.0,
- "n": 1,
- "stream": stream,
- "presence_penalty": 0,
- "frequency_penalty": 0,
- }
- try:
- print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........")
- except:
- print('输入中可能存在乱码。')
- return headers,payload
-
-
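To make the request construction above easier to follow, here is a small, self-contained sketch of the history-to-messages packing that generate_payload performs. It is an editorial illustration with hypothetical sample data; it deliberately omits the API-key handling and the timeout/empty-reply edge cases of the original loop.

def pack_messages(inputs, history, system_prompt):
    # history stores alternating [user, assistant, user, assistant, ...] turns
    messages = [{"role": "system", "content": system_prompt}]
    for i in range(0, len(history) // 2 * 2, 2):
        user_turn = {"role": "user", "content": history[i]}
        assistant_turn = {"role": "assistant", "content": history[i + 1]}
        # skip pairs whose content is empty, as the original loop does
        if user_turn["content"] and assistant_turn["content"]:
            messages.append(user_turn)
            messages.append(assistant_turn)
    # the new question always goes last, as a user turn
    messages.append({"role": "user", "content": inputs})
    return messages

print(pack_messages("Summarize the abstract.",
                    ["Hello", "Hi, how can I help?"],
                    "You are a helpful assistant."))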
diff --git a/spaces/falterWliame/Face_Mask_Detection/PreSonus Studio One Pro 4.1.3 [Extra Quality] Crack Activation Key [Latest].md b/spaces/falterWliame/Face_Mask_Detection/PreSonus Studio One Pro 4.1.3 [Extra Quality] Crack Activation Key [Latest].md
deleted file mode 100644
index 5288f77489c4054e04a4048639a57e9f405ea79a..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/PreSonus Studio One Pro 4.1.3 [Extra Quality] Crack Activation Key [Latest].md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-You can find the full version of the PreSonus Studio One download on one of the most popular software websites in the world, cnet.com. Moreover, it is safe and free of charge to download and install. The free edition of Studio One is among the latest versions of the software and builds on the upgraded Studio line, and the whole setup provides a hefty PreSonus Studio One download that will perform well on your system.
-The PreSonus Studio One download is a thoughtfully crafted studio version with all-new features that create the perfect foundation for your commercial music or professional mixing. Studio One 5 Mac Crack is compelling, expert audio editing software. It effortlessly integrates the tried-and-true studio version with the latest advances, making an excellent combination with a powerful studio.
-PreSonus Studio One Pro 4.1.3 Crack Activation Key [Latest]
-Download ⚙ https://urlca.com/2uDd1i
-The best PreSonus Studio One download provides the complete and unrestricted mixing project page with redesigned view controls. It lets you modify the settings of your hardware and the most recent studio. The most recent PreSonus Studio One download maintains links to new functions through shows and courses. Moreover, the studio is the simplest to use among the essential audio editing software.
-All of this, in addition to the studio, will run on a Mac computer. The key is the most popular PreSonus Studio One download software, which lets you work from your Mac computer: powerful software that gives you complete access to the PreSonus Studio One studio as well as full stability. The software is offered on the market completely free.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/DTX Multi 12 (Champeta) - A Versatile and Compact Batera DTX for Acoustic Drummers - APK Download.md b/spaces/fatiXbelha/sd/DTX Multi 12 (Champeta) - A Versatile and Compact Batera DTX for Acoustic Drummers - APK Download.md
deleted file mode 100644
index f2055bb816de96f93eb51f3d6747307a8dfd33d6..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/DTX Multi 12 (Champeta) - A Versatile and Compact Batera DTX for Acoustic Drummers - APK Download.md
+++ /dev/null
@@ -1,179 +0,0 @@
-
-What is batería dtx multi 12 apk and why you need it
- If you are a drummer or a music lover who enjoys playing urban sounds (champeta), you may have heard of batería dtx multi 12 apk. This is a music & audio app developed by AppsKD that allows you to turn your Android device into a virtual drum pad. With this app, you can create and play amazing beats using different drum kits and sounds. You can also record and share your tracks with others, or connect the app with your Yamaha DTX-MULTI 12 electronic drum pad for more control and options.
- In this article, we will show you how to download and install batería dtx multi 12 apk on your Android device, how to use it to create and play urban sounds (champeta), how to connect it with your Yamaha DTX-MULTI 12, and what are the pros and cons of this app. We will also give you some alternatives to batería dtx multi 12 apk that you may want to try. By the end of this article, you will have a better understanding of what batería dtx multi 12 apk is and why you need it.
-batería dtx multi 12 apk
-DOWNLOAD ✦✦✦ https://urllie.com/2uNFmZ
- How to download and install batería dtx multi 12 apk on your Android device
- There are two ways to get batería dtx multi 12 apk on your Android device. One is to download it from the Google Play Store, where it has been available since November 2021. The other is to download it from other sources, such as APKPure or APKMirror. Here are the steps for both methods:
- Method 1: Download from Google Play Store
-
-- Open the Google Play Store app on your Android device.
-- Search for "batería dtx multi 12" or "DTX Multi 12 (Champeta)" in the search bar.
-- Select the app from the search results and tap on "Install".
-- Wait for the app to download and install on your device.
-- Once installed, tap on "Open" to launch the app.
-
- Method 2: Download from other sources
-
-- Go to a website that offers APK files for Android apps, such as APKPure or APKMirror.
-- Search for "batería dtx multi 12" or "DTX Multi 12 (Champeta)" in the search bar.
-- Select the app from the search results and tap on "Download APK".
-- Wait for the APK file to download on your device.
-- Before installing the APK file, you may need to enable the "Unknown sources" option on your device settings. This will allow you to install apps from sources other than the Google Play Store.
-- Locate the APK file on your device using a file manager app or your browser's downloads folder.
-- Tap on the APK file and follow the instructions to install the app on your device.
-- Once installed, tap on "Open" to launch the app.
-
- Note: Downloading and installing APK files from other sources may pose some risks, such as malware, viruses, or data theft. Therefore, we recommend that you only download APK files from trusted and reputable websites, and scan them with an antivirus app before installing them. We are not responsible for any damages or losses caused by using APK files from other sources.
- How to use batería dtx multi 12 apk to create and play urban sounds (champeta)
- Now that you have downloaded and installed batería dtx multi 12 apk on your Android device, you are ready to use it to create and play urban sounds (champeta). Champeta is a musical genre that originated in the Caribbean coast of Colombia, influenced by African, Caribbean, and Latin American rhythms. It is characterized by its upbeat tempo, catchy melodies, and social commentary lyrics. Champeta is often played with electronic drum pads, such as the Yamaha DTX-MULTI 12, which can produce a variety of sounds and effects.
- Batería dtx multi 12 apk is designed to emulate the Yamaha DTX-MULTI 12 electronic drum pad, but with more features and options. You can use the app to play champeta sounds using different drum kits and sounds, record and share your beats with others, or connect the app with your Yamaha DTX-MULTI 12 for more control and options. Here are some tutorials on how to use batería dtx multi 12 apk to create and play urban sounds (champeta):
- How to choose from the different drum kits and sounds
- Batería dtx multi 12 apk offers you 12 drum pads that you can customize with different drum kits and sounds. You can choose from over 100 sounds, including acoustic drums, electronic drums, percussion, effects, vocals, and more. You can also adjust the volume, pitch, pan, reverb, and delay of each sound. Here are the steps to choose from the different drum kits and sounds:
-
-- Launch the app and tap on the "Menu" icon at the top left corner of the screen.
-- Tap on "Drum Kit" to see the list of available drum kits. You can scroll left or right to see more options.
-- Tap on the drum kit that you want to use. The app will load the drum kit and assign it to the 12 drum pads.
-- If you want to change the sound of a specific drum pad, tap on the "Edit" icon at the top right corner of the screen.
-- Tap on the drum pad that you want to edit. A pop-up window will appear with the sound settings.
-- Tap on "Sound" to see the list of available sounds. You can scroll left or right to see more options.
-- Tap on the sound that you want to use. The app will assign it to the selected drum pad.
-- If you want to adjust the volume, pitch, pan, reverb, or delay of the sound, use the sliders below the sound name.
-- When you are done editing the sound settings, tap on "OK" to save your changes.
-
- You can repeat these steps for any drum pad that you want to customize. You can also save your custom drum kit by tapping on "Save" at the top right corner of the screen. You can name your drum kit and access it later from the "Drum Kit" menu.
- How to record and share your beats with batería dtx multi 12 apk
- Batería dtx multi 12 apk also allows you to record and share your beats with others. You can use the built-in recorder, metronome, and mixer to create and export your tracks. You can also share your tracks via email, WhatsApp, Facebook, or other apps. Here are the steps to record and share your beats with batería dtx multi 12 apk:
-
-- Launch the app and tap on the "Record" icon at the bottom center of the screen.
-- A pop-up window will appear with the recording settings. You can adjust the recording time, the recording quality, and the metronome settings.
-- When you are ready to record, tap on "Start". The app will start recording your beats as you play the drum pads.
-- When you are done recording, tap on "Stop". The app will save your recording and show you a preview of your track.
-- If you want to edit your track, tap on "Edit". You can use the mixer to adjust the volume, pan, reverb, and delay of each drum pad. You can also trim, cut, copy, paste, or delete parts of your track.
-- If you want to export your track, tap on "Export". You can choose the file format (WAV or MP3) and the file name. The app will export your track to your device storage.
-- If you want to share your track, tap on "Share". You can choose the app that you want to use to share your track, such as email, WhatsApp, Facebook, or other apps. The app will open the selected app and attach your track to it.
-
- You can repeat these steps for any track that you want to record and share. You can also access your recorded tracks from the "Record" menu.
- How to connect batería dtx multi 12 apk with your Yamaha DTX-MULTI 12 electronic drum pad
- Batería dtx multi 12 apk is not only a virtual drum pad, but also a controller for your Yamaha DTX-MULTI 12 electronic drum pad. You can use the app to sync your sounds and settings with your Yamaha DTX-MULTI 12, and play it with more flexibility and convenience. Here are the steps to connect batería dtx multi 12 apk with your Yamaha DTX-MULTI 12:
-
-- Make sure that your Yamaha DTX-MULTI 12 is turned on and connected to a power source.
-- Connect your Yamaha DTX-MULTI 12 to your Android device using a USB cable or a MIDI interface.
-- Launch the app and tap on the "Menu" icon at the top left corner of the screen.
-- Tap on "Settings" and then on "MIDI Settings".
-- Select the MIDI input and output devices that correspond to your Yamaha DTX-MULTI 12.
-- Tap on "OK" to save your MIDI settings.
-- The app will automatically detect your Yamaha DTX-MULTI 12 and sync your sounds and settings with it.
-- You can now use the app to control your Yamaha DTX-MULTI 12. You can play the drum pads on the app or on the Yamaha DTX-MULTI 12, and hear the same sounds from both devices. You can also change the drum kits and sounds on the app or on the Yamaha DTX-MULTI 12, and see the same changes on both devices.
-
- Note: The app may not be compatible with some older models or versions of the Yamaha DTX-MULTI 12. If you encounter any problems or errors while connecting or syncing the app with your Yamaha DTX-MULTI 12, please contact the app developer or Yamaha customer service for assistance.
- The pros and cons of batería dtx multi 12 apk
- Batería dtx multi 12 apk is a great app for drummers and music lovers who enjoy playing urban sounds (champeta). However, like any other app, it has its pros and cons. Here are some of them:
- The pros of batería dtx multi 12 apk
-
-- It is versatile. You can use it as a virtual drum pad, a recorder, a mixer, or a controller for your Yamaha DTX-MULTI 12.
-- It is sensitive. It responds quickly and accurately to your touch and pressure on the drum pads.
-- It is fast. It loads quickly and runs smoothly on most Android devices.
-- It is compact. It does not take up much space on your device storage or memory.
-- It is compatible. It works well with most Android devices and versions, as well as with most MIDI devices and interfaces.
-
- The cons of batería dtx multi 12 apk
-
-- It is not free. You have to pay a small fee to download and use the app from the Google Play Store or other sources.
-- It is not realistic. It does not replicate the feel and sound of a real drum pad or drum set.
-- It is not comprehensive. It does not offer many features and functions that other drumming or music production apps offer, such as editing, looping, sequencing, sampling, or synthesizing.
-- It is not updated. It has not received any updates or improvements since its release in November 2021.
-
- These are some of the pros and cons of batería dtx multi 12 apk. You may have your own opinions and preferences about the app, depending on your needs and expectations. You may also find some of these pros and cons to be more or less important than others. Ultimately, you have to decide for yourself whether batería dtx multi 12 apk is worth downloading and using.
- The best alternatives to batería dtx multi 12 apk
- If you are not satisfied with batería dtx multi 12 apk, or if you want to try something different, you may want to check out some of the best alternatives to batería dtx multi 12 apk. These are some of the other apps that offer similar or better features and functions for drumming and music production:
-
-- Drum Pad Machine - Beat Maker & Music Maker: This is a popular and highly rated app that lets you create and play beats using various drum pads, loops, samples, and effects. You can choose from different genres and styles, such as hip hop, EDM, dubstep, trap, and more. You can also record and share your tracks with others, or collaborate with other users online. The app is free to download and use, but it offers in-app purchases for more features and content.
-- Real Drum - The Best Drum Pads Simulator: This is a realistic and easy-to-use app that simulates a real drum set on your Android device. You can play the drums with your fingers or with external devices, such as drumsticks or pedals. You can also customize the drum set with different skins, sounds, and arrangements. You can also record and share your tracks with others, or play along with your favorite songs. The app is free to download and use, but it offers in-app purchases for more features and content.
-- FL Studio Mobile: This is a professional and powerful app that allows you to create and edit music on your Android device. You can use various instruments, effects, samples, loops, and plugins to create any kind of music you want. You can also record and edit audio, MIDI, or vocals. You can also export and share your tracks with others, or import them to your PC or Mac for further editing. The app is not free to download and use, but it offers a one-time payment for lifetime access to all features and content.
-
- These are some of the best alternatives to batería dtx multi 12 apk that you may want to try. Of course, there are many other apps that you can find on the Google Play Store or other sources that may suit your needs and preferences better. You can also compare the ratings, reviews, features, and prices of different apps before downloading and using them.
- Conclusion
- Batería dtx multi 12 apk is a music & audio app developed by AppsKD that allows you to turn your Android device into a virtual drum pad. With this app, you can create and play amazing beats using different drum kits and sounds. You can also record and share your tracks with others, or connect the app with your Yamaha DTX-MULTI 12 electronic drum pad for more control and options.
- In this article, we have shown you how to download and install batería dtx multi 12 apk on your Android device, how to use it to create and play urban sounds (champeta), how to connect it with your Yamaha DTX-MULTI 12, and what are the pros and cons of this app. We have also given you some alternatives to batería dtx multi 12 apk that you may want to try.
- We hope that this article has been helpful and informative for you. If you have any questions or feedback about batería dtx multi 12 apk or this article, please feel free to leave a comment below. We would love to hear from you.
- Thank you for reading this article. We hope that you enjoy using batería dtx multi 12 apk and creating amazing beats with it.
- FAQs
- Here are some of the frequently asked questions about batería dtx multi 12 apk:
-
-- What is the size of batería dtx multi 12 apk?
-The size of batería dtx multi 12 apk varies depending on the device and version that you are using. However, the average size of the app is about 30 MB. You may need to have enough space on your device storage or memory to download and install the app.
- - What are the requirements for batería dtx multi 12 apk?
-The requirements for batería dtx multi 12 apk vary depending on the device and version that you are using. However, the minimum requirements for the app are as follows:
-
-- Android version: 4.1 or higher
-- RAM: 1 GB or higher
-- Processor: 1 GHz or higher
-- Screen resolution: 800 x 480 or higher
-- Internet connection: Required for downloading and updating the app, and for sharing your tracks with others
-
- - Is batería dtx multi 12 apk safe to use?
-Batería dtx multi 12 apk is safe to use if you download and install it from the Google Play Store or other trusted and reputable sources. The app does not contain any malware, viruses, or data theft. However, you should always be careful when downloading and installing APK files from other sources, as they may pose some risks. You should also scan the APK files with an antivirus app before installing them. We are not responsible for any damages or losses caused by using APK files from other sources.
- - How can I contact the app developer or Yamaha customer service?
-If you have any questions or feedback about batería dtx multi 12 apk or about the Yamaha DTX-MULTI 12 hardware, you can contact the app developer or Yamaha customer service using the following methods:
-
-- App developer: You can email the app developer at appskd@gmail.com, or visit their website at https://appskd.com/.
-- Yamaha customer service: You can call Yamaha customer service at +1-714-522-9000, or visit their website at https://usa.yamaha.com/.
-
- - How can I support batería dtx multi 12 apk or this article?
-If you like batería dtx multi 12 apk or this article, you can support them in the following ways:
-
-- Rate and review the app on the Google Play Store or other sources. This will help the app developer to improve the app and reach more users.
-- Share the app or this article with your friends, family, or social media followers. This will help more people to discover and enjoy the app and this article.
-- Donate to the app developer or to the writer of this article. This will help them continue developing apps and writing articles for you.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Jeriqs TRUE LIFE STORY Mp3 from his Billion Dollar Dream Album.md b/spaces/fatiXbelha/sd/Download Jeriqs TRUE LIFE STORY Mp3 from his Billion Dollar Dream Album.md
deleted file mode 100644
index b99e95016207d2be760c1eccd4420fe6d3dd5dad..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Jeriqs TRUE LIFE STORY Mp3 from his Billion Dollar Dream Album.md
+++ /dev/null
@@ -1,136 +0,0 @@
-
-Jeriq: The Rising Star of Igbo Rap
-If you are a fan of Nigerian music, especially rap, you must have heard of Jeriq. He is one of the most talented and promising rappers in the country, who has made a name for himself with his unique style of Igbo rap. In this article, we will tell you everything you need to know about Jeriq, his biography, career, achievements, songs, net worth, and more.
- Early Life and Education
-Jeriq's real name is Ani Jeremiah Chukwuebuka. He was born on May 6th, 1999 in Nkpor, Anambra State, and is the first child in a family of five. He grew up in a humble, religious home, where he learned the values of hard work and discipline. He attended College of the Immaculate Conception (CIC) Enugu for his secondary education, graduating in 2015, and then studied Information Technology at Enugu State University of Science and Technology (ESUT), graduating in 2021.
- Musical Journey
-Jeriq discovered his passion for music at a very young age. He started recording songs at the age of 15, using his phone and a laptop. His debut single was titled "Iyoo", which he released in 2016. The song was well received by his friends and family, who encouraged him to pursue his musical dreams. He joined Janded Music Empire in 2018, a record label that helped him to improve his skills and exposure. He released more songs under the label, such as "Last Last", "Update", "Paper", "Hussle O Clock", "Police Matters", "Runs", and "Junky".
- Breakthrough and Recognition
-Jeriq's breakthrough came in 2020, when he released a hit song titled "No More Nleka" featuring Zoro, a popular Igbo rapper. The song went viral on social media and radio stations, earning him millions of streams and downloads. He followed up with another banger titled "Remember", which he later remixed with Phyno, another renowned Igbo rapper. He also released his first EP in 2020, titled "Hood Boy Dreams". The EP contained six tracks that showcased his lyrical prowess and storytelling abilities. The EP was well received by critics and fans alike, who praised him for his originality and authenticity.
- Collaborations and Features
-Jeriq has collaborated with some of the biggest names in the Nigerian music industry, both within and outside his genre. He has worked with artists like Zoro, Phyno, Dremo, Flavour, DJ Neptune, Kofi Jamar, Psycho YP, Alpha P, among others. He has also featured on several projects by other artists, such as DJ Neptune's "Cash", Dremo's "East N West" EP, Flavour's "Flavour of Africa" album, Kofi Jamar's "Appetite for Destruction" EP, Psycho YP's "Euphoria" EP, and Alpha P's "Wolves and Mustangs" EP.
- Billion Dollar Dream Album
-In 2022, Jeriq released his debut album, "Billion Dollar Dream". Its 12 tracks feature artists like Dremo, Flavour, DJ Neptune, and Kofi Jamar, and the project showcases Jeriq's versatility and growth as an artist. Fans and critics hailed it as one of the best rap albums of the year, and it topped charts on platforms such as Apple Music, Audiomack, and Boomplay. Some of the popular songs from the album are "Dreams", "Amen", "Apology", "East to West", and "True Life Story".
- Musical Style and Influences
-Jeriq is known for his unique style of rap, which blends Igbo and English languages. He raps about his experiences, struggles, aspirations, and realities as a young man from the East. He also infuses elements of Afrobeat, Highlife, and Trap music into his sound. He is influenced by his culture and environment, as well as by other rap legends such as Tupac Shakur, Notorious B.I.G., Jay-Z, Nas, Eminem, Lil Wayne, Kendrick Lamar, J. Cole, Drake, M.I Abaga, Olamide, Phyno, Zoro, etc.
- Awards and Nominations
-Jeriq has won or been nominated for several awards in recognition of his talent and achievements, including the following:
-
-Award | Category | Year | Result |
-The Headies | Next Rated | 2021 | Nominated |
-The Headies | Best Rap Album (Hood Boy Dreams) | 2021 | Nominated |
-The Headies | Best Rap Single (Remember) | 2021 | Nominated |
-AFRIMMA | Best Newcomer | 2021 | Nominated |
-AFRIMMA | Best Rap Act | 2021 | Nominated |
-... | ... | ... | ... |
-You can see that Jeriq has been recognized by some of the most prestigious awards in the Nigerian music industry. He has also been praised by his peers and fans for his rap skills and consistency.