diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/Telerikwebuidllfreedownload.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/Telerikwebuidllfreedownload.md deleted file mode 100644 index ed14497ec683f666e5b3410570483aa4b9304a3e..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/Telerikwebuidllfreedownload.md +++ /dev/null @@ -1,128 +0,0 @@ -## Telerikwebuidllfreedownload - - - - - - ![Telerikwebuidllfreedownload](https://www.testingtoolsguide.net/wp-content/uploads/2016/11/telerik10.gif.png) - - - - - -**Click Here ✏ [https://jinyurl.com/2tA08d](https://jinyurl.com/2tA08d)** - - - - - - - - - - - - - -# How to Download and Install Telerik Web UI DLL for ASP.NET AJAX - - - -Telerik Web UI DLL is a core assembly that contains the Telerik UI for ASP.NET AJAX controls. It also includes the default skin and the design surface code for Visual Studio. If you want to use the Telerik UI for ASP.NET AJAX controls in your web applications, you need to download and install this assembly. - - - -In this article, we will show you how to download and install Telerik Web UI DLL for ASP.NET AJAX in a few easy steps. - - - -## Step 1: Download Telerik Web UI DLL - - - -There are two ways to download Telerik Web UI DLL: from the official website or from a third-party website. - - - -### Option 1: Download from the official website - - - -If you have a subscription or a free trial for the Telerik UI for ASP.NET AJAX controls, you can download Telerik Web UI DLL from the official website. Here are the steps: - - - -1. Go to [https://www.telerik.com/account/product-download?product=ASPNETAJAX](https://www.telerik.com/account/product-download?product=ASPNETAJAX). - -2. Select PURCHASE if you have a subscription or DOWNLOAD TRIAL if you use the free trial[^3^]. - -3. Log in with your Telerik account credentials or create a new account if you don't have one. - -4. Select the version of Telerik UI for ASP.NET AJAX that you want to download. You can choose the latest version or a previous one. - -5. Click on DOWNLOAD ZIP FILE to download a ZIP archive that contains all the assemblies and resources for the selected version. - - - -### Option 2: Download from a third-party website - - - -If you don't have a subscription or a free trial for the Telerik UI for ASP.NET AJAX controls, you can download Telerik Web UI DLL from a third-party website that offers free .DLL downloads. Here are the steps: - - - -1. Go to [https://www.dllme.com/dll/files/telerik\_web\_ui](https://www.dllme.com/dll/files/telerik_web_ui). - -2. Select the version or variant of Telerik Web UI DLL that you need[^2^]. You can also request a different version if it is not available. - -3. Click on DOWNLOAD to download a ZIP archive that contains only the Telerik Web UI DLL file. - - - -## Step 2: Install Telerik Web UI DLL - - - -After you download Telerik Web UI DLL, you need to install it on your computer. There are two ways to install Telerik Web UI DLL: manually or automatically. - - - -### Option 1: Install manually - - - -If you downloaded Telerik Web UI DLL from a third-party website or if you want to have more control over the installation process, you can install it manually. Here are the steps: - - - -1. Extract the ZIP archive that contains Telerik Web UI DLL to a folder of your choice. - -2. Copy the Telerik.Web.UI.dll file to one of these locations: - - - The Bin folder of your web application project. - - - The Global Assembly Cache (GAC) of your computer. 
To do this, you need to use the gacutil.exe tool that comes with Visual Studio or .NET Framework SDK. For example, you can run this command in a command prompt: `gacutil.exe -i C:\Telerik\Web\UI\Telerik.Web.UI.dll` - -3. Add a reference to Telerik.Web.UI.dll in your web application project. To do this, you need to use Visual Studio or another IDE that supports .NET development. For example, you can follow these steps in Visual Studio: - - - Right-click on your web application project in Solution Explorer and select Add Reference. - - - In the Reference Manager window, select Browse and navigate to the location where you copied Telerik.Web.UI.dll. - - - Select Telerik.Web.UI.dll and click OK. - - - -### Option 2: Install automatically - - - -If you downloaded Telerik Web UI DLL from the official website and if you have Visual Studio installed on - - 145887f19f - - - - - diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandicam 5.1.1 Crack Download The Ultimate Guide.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandicam 5.1.1 Crack Download The Ultimate Guide.md deleted file mode 100644 index 0361430ebee81c7ce35f873935f6a8462795960d..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Bandicam 5.1.1 Crack Download The Ultimate Guide.md +++ /dev/null @@ -1,33 +0,0 @@ -
-

How to Get Bandicam 5.1.1 Crack Download for Free

-

Bandicam is a popular screen-recording program that allows you to capture your gameplay, webcam, desktop, or any other video source with high quality and performance. However, the free version of Bandicam has some limitations, such as a watermark on recorded videos and a 10-minute recording time limit per file. If you want to enjoy the full features of Bandicam without any restrictions, you may need to purchase a license key or look for a crack version.

-

bandicam 5.1.1 crack download


DOWNLOADhttps://byltly.com/2uKxXD



-

In this article, we will show you how to get a Bandicam 5.1.1 crack download for free; version 5.1.1 is the latest release of the software as of May 2023. We will also explain the risks and benefits of using a cracked version of Bandicam and provide some alternatives if you don't want to take any chances.

-

What is Bandicam 5.1.1 Crack Download?

-

Bandicam 5.1.1 crack download is a modified version of the original Bandicam software that bypasses the activation process and unlocks all the premium features without paying for a license key. A crack is usually a file or a program that you need to run or copy to the installation folder of Bandicam to make it work.

-

There are many websites that claim to offer Bandicam 5.1.1 crack download for free, but not all of them are reliable or safe. Some of them may contain viruses, malware, spyware, or adware that can harm your computer or steal your personal information. Some of them may also provide fake or outdated cracks that don't work or cause errors.

-

Therefore, you need to be cautious when looking for a Bandicam 5.1.1 crack download online. Always scan downloaded files with an antivirus program before opening them, and avoid clicking on suspicious links or pop-ups. You should also read reviews and comments from other users to see whether the crack actually works.

-

-

How to Get Bandicam 5.1.1 Crack Download for Free?

-

If you still want to try Bandicam 5.1.1 crack download for free, here are the steps you need to follow:

-
    -
  1. Download Bandicam 5.1.1 from the official website: https://www.bandicam.com/downloads/. This is the trial version that you need to install on your computer.

  2. Download Bandicam 5.1.1 crack from a reputable website: https://crack4windows.com/crack?s=bandicam&id=46042. This is one of the websites that we found that offers a working crack for Bandicam 5.1.1 as of May 2023.

  3. Extract the zip file that contains the crack and run the file named "bandicam_crack.exe" as administrator.

  4. Follow the instructions on the screen and click on the "Crack" button.

  5. Wait for a few seconds until the crack is applied successfully.

  6. Launch Bandicam and enjoy the full features without any limitations.
-

What are the Risks and Benefits of Using Bandicam 5.1.1 Crack Download?

-

Using Bandicam 5.1.1 crack download has some advantages and disadvantages that you need to consider before deciding whether to use it or not.

-

The Benefits of Using Bandicam 5.1.1 Crack Download

- -

The Risks of Using Bandicam 5.1.1 Crack Download

-Compatibility and LicenseVirtual DJ Free is provided under a freeware license on Windows from MP3 player software with no restrictions on usage. Download and installation of this PC software is free and 2023.7388 is the latest version last time we checked.

-

Virtual DJ Free can be used on a computer running Windows 11 or Windows 10. Previous versions of the operating system shouldn't be a problem with Windows 8, Windows 7 and Windows Vista having been tested. Windows XP is supported. It runs on both 32-bit and 64-bit systems with no dedicated 64-bit download provided.Filed under: Virtual DJ Free DownloadFree MP3 Player SoftwareWe have tested Virtual DJ Free 2023.7388 against malware with several different programs. We certify that this program is clean of viruses, malware and trojans.Free Download for Windows 447.69 MB - Tested clean

  • $$ Cost:Free Freeware

    -

    Appvn Android is one of the best websites online to download APK apps or files. With Appvn Android, you can download the best free Android games and best free Android apps available for Android tablets or Android phones. At this website you can get the APK data for some of the most popular Android games & Android apps like Minecraft: Pocket Edition, Appvn, CF Mobile, KingRoot, Lucky Patcher and many more.

    -

    Abstract:The National Forest City (NFC) project is an important measure to promote the urban environment in China, but its environmental performances have not been fully evaluated yet. This paper uses difference-in-differences (DID) to evaluate the smog pollution controlling effects and mechanisms of the NFC project based on the panel data of 283 cities in China from 2000 to 2018. This study found the following: (1) The NFC project significantly reduced smog pollution by 3.4% on average; the effect strengthened over time and rose to 8.5% in the 10th year after the NFC project. The average treatment effect was also confirmed by a series of robustness tests. (2) The NFC project can control smog pollution by greening urban space and greening social culture. (3) The treatment effect was related to both natural factors and human factors. The reduction in smog pollution was much stronger in the southern, hilly, warm and humid regions. Public willingness and government attention to environmental protection help with the smog pollution controlling of the NFC project as well.Keywords: national forest city; urban forests; smog pollution; difference-in-differences; environment governance

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/EepromWriterProgramPl2303hx How to Use the PL-2303HX USB to Serial Bridge Controller.md b/spaces/bioriAsaeru/text-to-voice/EepromWriterProgramPl2303hx How to Use the PL-2303HX USB to Serial Bridge Controller.md deleted file mode 100644 index 99e95fed6f213f18841549977ffc4c618692fa5e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/EepromWriterProgramPl2303hx How to Use the PL-2303HX USB to Serial Bridge Controller.md +++ /dev/null @@ -1,6 +0,0 @@ -

    EepromWriterProgramPl2303hx


    Download →→→ https://urloso.com/2uyQ6Y



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/autobatch.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/autobatch.py deleted file mode 100644 index 7c0ed033158d4871d3d90f2d122aacb6e6edd3ff..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/autobatch.py +++ /dev/null @@ -1,66 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Auto-batch utils -""" - -from copy import deepcopy - -import numpy as np -import torch - -from utils.general import LOGGER, colorstr, emojis -from utils.torch_utils import profile - - -def check_train_batch_size(model, imgsz=640, amp=True): - # Check YOLOv5 training batch size - with torch.cuda.amp.autocast(amp): - return autobatch(deepcopy(model).train(), imgsz) # compute optimal batch size - - -def autobatch(model, imgsz=640, fraction=0.9, batch_size=16): - # Automatically estimate best batch size to use `fraction` of available CUDA memory - # Usage: - # import torch - # from utils.autobatch import autobatch - # model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) - # print(autobatch(model)) - - # Check device - prefix = colorstr('AutoBatch: ') - LOGGER.info(f'{prefix}Computing optimal batch size for --imgsz {imgsz}') - device = next(model.parameters()).device # get model device - if device.type == 'cpu': - LOGGER.info(f'{prefix}CUDA not detected, using default CPU batch-size {batch_size}') - return batch_size - - # Inspect CUDA memory - gb = 1 << 30 # bytes to GiB (1024 ** 3) - d = str(device).upper() # 'CUDA:0' - properties = torch.cuda.get_device_properties(device) # device properties - t = properties.total_memory / gb # GiB total - r = torch.cuda.memory_reserved(device) / gb # GiB reserved - a = torch.cuda.memory_allocated(device) / gb # GiB allocated - f = t - (r + a) # GiB free - LOGGER.info(f'{prefix}{d} ({properties.name}) {t:.2f}G total, {r:.2f}G reserved, {a:.2f}G allocated, {f:.2f}G free') - - # Profile batch sizes - batch_sizes = [1, 2, 4, 8, 16] - try: - img = [torch.zeros(b, 3, imgsz, imgsz) for b in batch_sizes] - results = profile(img, model, n=3, device=device) - except Exception as e: - LOGGER.warning(f'{prefix}{e}') - - # Fit a solution - y = [x[2] for x in results if x] # memory [2] - p = np.polyfit(batch_sizes[:len(y)], y, deg=1) # first degree polynomial fit - b = int((f * fraction - p[1]) / p[0]) # y intercept (optimal batch size) - if None in results: # some sizes failed - i = results.index(None) # first fail index - if b >= batch_sizes[i]: # y intercept above failure point - b = batch_sizes[max(i - 1, 0)] # select prior safe point - - fraction = np.polyval(p, b) / t # actual fraction predicted - LOGGER.info(emojis(f'{prefix}Using batch-size {b} for {d} {t * fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%) ✅')) - return b diff --git a/spaces/calvininterview/bart-question-interactive/README.md b/spaces/calvininterview/bart-question-interactive/README.md deleted file mode 100644 index 56d4c2212847058383c9ba68e45ec359f8c3be0b..0000000000000000000000000000000000000000 --- a/spaces/calvininterview/bart-question-interactive/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bart Question Interactive -emoji: 💻 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 2.8.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/candlend/vits-hoshimi/sovits/resample.py 
b/spaces/candlend/vits-hoshimi/sovits/resample.py deleted file mode 100644 index fabae4afbb330cccad1681b7941a63547c93c640..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/resample.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - # speaker 's5', 'p280', 'p315' are excluded, - speaker = spkdir.split(os.sep)[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, None) - wav, _ = librosa.effects.trim(wav, top_db=20) - peak = np.abs(wav).max() - if peak > 1.0: - wav = 0.98 * wav / peak - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2) - save_name = wav_name - save_path2 = os.path.join(args.out_dir2, speaker, save_name) - wavfile.write( - save_path2, - args.sr2, - (wav2 * np.iinfo(np.int16).max).astype(np.int16) - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr2", type=int, default=32000, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir") - parser.add_argument("--out_dir2", type=str, default="./dataset/32k", help="path to target dir") - args = parser.parse_args() - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/captchaboy/FAST-ABINet-OCR/modules/attention.py b/spaces/captchaboy/FAST-ABINet-OCR/modules/attention.py deleted file mode 100644 index 7b6a226284e608b44051bb4dc6d6dfac4e1ab20a..0000000000000000000000000000000000000000 --- a/spaces/captchaboy/FAST-ABINet-OCR/modules/attention.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch -import torch.nn as nn -from .transformer import PositionalEncoding - -class Attention(nn.Module): - def __init__(self, in_channels=512, max_length=25, n_feature=256): - super().__init__() - self.max_length = max_length - - self.f0_embedding = nn.Embedding(max_length, in_channels) - self.w0 = nn.Linear(max_length, n_feature) - self.wv = nn.Linear(in_channels, in_channels) - self.we = nn.Linear(in_channels, max_length) - - self.active = nn.Tanh() - self.softmax = nn.Softmax(dim=2) - - def forward(self, enc_output): - enc_output = enc_output.permute(0, 2, 3, 1).flatten(1, 2) - reading_order = torch.arange(self.max_length, dtype=torch.long, device=enc_output.device) - reading_order = reading_order.unsqueeze(0).expand(enc_output.size(0), -1) # (S,) -> (B, S) - reading_order_embed = self.f0_embedding(reading_order) # b,25,512 - - t = self.w0(reading_order_embed.permute(0, 2, 1)) # b,512,256 - t = self.active(t.permute(0, 2, 1) + self.wv(enc_output)) # b,256,512 - - attn = self.we(t) # b,256,25 - attn = self.softmax(attn.permute(0, 2, 1)) # b,25,256 - g_output = torch.bmm(attn, enc_output) # b,25,512 - return g_output, attn.view(*attn.shape[:2], 8, 32) - - -def encoder_layer(in_c, out_c, k=3, s=2, p=1): - return nn.Sequential(nn.Conv2d(in_c, out_c, k, s, p), - nn.BatchNorm2d(out_c), - nn.ReLU(True)) - -def decoder_layer(in_c, out_c, 
k=3, s=1, p=1, mode='nearest', scale_factor=None, size=None): - align_corners = None if mode=='nearest' else True - return nn.Sequential(nn.Upsample(size=size, scale_factor=scale_factor, - mode=mode, align_corners=align_corners), - nn.Conv2d(in_c, out_c, k, s, p), - nn.BatchNorm2d(out_c), - nn.ReLU(True)) - - -class PositionAttention(nn.Module): - def __init__(self, max_length, in_channels=512, num_channels=64, - h=8, w=32, mode='nearest', **kwargs): - super().__init__() - self.max_length = max_length - self.k_encoder = nn.Sequential( - encoder_layer(in_channels, num_channels, s=(1, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)), - encoder_layer(num_channels, num_channels, s=(2, 2)) - ) - self.k_decoder = nn.Sequential( - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, num_channels, scale_factor=2, mode=mode), - decoder_layer(num_channels, in_channels, size=(h, w), mode=mode) - ) - - self.pos_encoder = PositionalEncoding(in_channels, dropout=0, max_len=max_length) - self.project = nn.Linear(in_channels, in_channels) - - def forward(self, x): - N, E, H, W = x.size() - k, v = x, x # (N, E, H, W) - - # calculate key vector - features = [] - for i in range(0, len(self.k_encoder)): - k = self.k_encoder[i](k) - features.append(k) - for i in range(0, len(self.k_decoder) - 1): - k = self.k_decoder[i](k) - k = k + features[len(self.k_decoder) - 2 - i] - k = self.k_decoder[-1](k) - - # calculate query vector - # TODO q=f(q,k) - zeros = x.new_zeros((self.max_length, N, E)) # (T, N, E) - q = self.pos_encoder(zeros) # (T, N, E) - q = q.permute(1, 0, 2) # (N, T, E) - q = self.project(q) # (N, T, E) - - # calculate attention - attn_scores = torch.bmm(q, k.flatten(2, 3)) # (N, T, (H*W)) - attn_scores = attn_scores / (E ** 0.5) - attn_scores = torch.softmax(attn_scores, dim=-1) - - v = v.permute(0, 2, 3, 1).view(N, -1, E) # (N, (H*W), E) - attn_vecs = torch.bmm(attn_scores, v) # (N, T, E) - - return attn_vecs, attn_scores.view(N, -1, H, W) diff --git a/spaces/changkeyculing/chatgpt-detector-single/README.md b/spaces/changkeyculing/chatgpt-detector-single/README.md deleted file mode 100644 index 1b058178a214b34b8a7c7f78bad14108c10968ea..0000000000000000000000000000000000000000 --- a/spaces/changkeyculing/chatgpt-detector-single/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatgpt Detector Single -emoji: 😻 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -duplicated_from: Hello-SimpleAI/chatgpt-detector-single ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/convert_pytorch_checkpoint_to_tf2.py b/spaces/chendl/compositional_test/transformers/src/transformers/convert_pytorch_checkpoint_to_tf2.py deleted file mode 100644 index f1358408a5cb57ca03503ac56773cb4d9d77ce89..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/convert_pytorch_checkpoint_to_tf2.py +++ /dev/null @@ -1,492 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Convert pytorch checkpoints to TensorFlow""" - - -import argparse -import os - -from . import ( - ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - BART_PRETRAINED_MODEL_ARCHIVE_LIST, - BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - CAMEMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - CTRL_PRETRAINED_CONFIG_ARCHIVE_MAP, - DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - DPR_CONTEXT_ENCODER_PRETRAINED_MODEL_ARCHIVE_LIST, - DPR_QUESTION_ENCODER_PRETRAINED_MODEL_ARCHIVE_LIST, - DPR_READER_PRETRAINED_MODEL_ARCHIVE_LIST, - ELECTRA_PRETRAINED_CONFIG_ARCHIVE_MAP, - FLAUBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - GPT2_PRETRAINED_CONFIG_ARCHIVE_MAP, - LAYOUTLM_PRETRAINED_MODEL_ARCHIVE_LIST, - LXMERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - OPENAI_GPT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP, - T5_PRETRAINED_CONFIG_ARCHIVE_MAP, - TRANSFO_XL_PRETRAINED_CONFIG_ARCHIVE_MAP, - WAV_2_VEC_2_PRETRAINED_CONFIG_ARCHIVE_MAP, - XLM_PRETRAINED_CONFIG_ARCHIVE_MAP, - XLM_ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP, - XLNET_PRETRAINED_CONFIG_ARCHIVE_MAP, - AlbertConfig, - BartConfig, - BertConfig, - CamembertConfig, - CTRLConfig, - DistilBertConfig, - DPRConfig, - ElectraConfig, - FlaubertConfig, - GPT2Config, - LayoutLMConfig, - LxmertConfig, - OpenAIGPTConfig, - RobertaConfig, - T5Config, - TFAlbertForPreTraining, - TFBartForConditionalGeneration, - TFBartForSequenceClassification, - TFBertForPreTraining, - TFBertForQuestionAnswering, - TFBertForSequenceClassification, - TFCamembertForMaskedLM, - TFCTRLLMHeadModel, - TFDistilBertForMaskedLM, - TFDistilBertForQuestionAnswering, - TFDPRContextEncoder, - TFDPRQuestionEncoder, - TFDPRReader, - TFElectraForPreTraining, - TFFlaubertWithLMHeadModel, - TFGPT2LMHeadModel, - TFLayoutLMForMaskedLM, - TFLxmertForPreTraining, - TFLxmertVisualFeatureEncoder, - TFOpenAIGPTLMHeadModel, - TFRobertaForCausalLM, - TFRobertaForMaskedLM, - TFRobertaForSequenceClassification, - TFT5ForConditionalGeneration, - TFTransfoXLLMHeadModel, - TFWav2Vec2Model, - TFXLMRobertaForMaskedLM, - TFXLMWithLMHeadModel, - TFXLNetLMHeadModel, - TransfoXLConfig, - Wav2Vec2Config, - Wav2Vec2Model, - XLMConfig, - XLMRobertaConfig, - XLNetConfig, - is_torch_available, - load_pytorch_checkpoint_in_tf2_model, -) -from .utils import CONFIG_NAME, WEIGHTS_NAME, cached_file, logging - - -if is_torch_available(): - import numpy as np - import torch - - from . 
import ( - AlbertForPreTraining, - BartForConditionalGeneration, - BertForPreTraining, - BertForQuestionAnswering, - BertForSequenceClassification, - CamembertForMaskedLM, - CTRLLMHeadModel, - DistilBertForMaskedLM, - DistilBertForQuestionAnswering, - DPRContextEncoder, - DPRQuestionEncoder, - DPRReader, - ElectraForPreTraining, - FlaubertWithLMHeadModel, - GPT2LMHeadModel, - LayoutLMForMaskedLM, - LxmertForPreTraining, - LxmertVisualFeatureEncoder, - OpenAIGPTLMHeadModel, - RobertaForMaskedLM, - RobertaForSequenceClassification, - T5ForConditionalGeneration, - TransfoXLLMHeadModel, - XLMRobertaForMaskedLM, - XLMWithLMHeadModel, - XLNetLMHeadModel, - ) - - -logging.set_verbosity_info() - -MODEL_CLASSES = { - "bart": ( - BartConfig, - TFBartForConditionalGeneration, - TFBartForSequenceClassification, - BartForConditionalGeneration, - BART_PRETRAINED_MODEL_ARCHIVE_LIST, - ), - "bert": ( - BertConfig, - TFBertForPreTraining, - BertForPreTraining, - BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "bert-large-uncased-whole-word-masking-finetuned-squad": ( - BertConfig, - TFBertForQuestionAnswering, - BertForQuestionAnswering, - BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "bert-large-cased-whole-word-masking-finetuned-squad": ( - BertConfig, - TFBertForQuestionAnswering, - BertForQuestionAnswering, - BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "bert-base-cased-finetuned-mrpc": ( - BertConfig, - TFBertForSequenceClassification, - BertForSequenceClassification, - BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "dpr": ( - DPRConfig, - TFDPRQuestionEncoder, - TFDPRContextEncoder, - TFDPRReader, - DPRQuestionEncoder, - DPRContextEncoder, - DPRReader, - DPR_CONTEXT_ENCODER_PRETRAINED_MODEL_ARCHIVE_LIST, - DPR_QUESTION_ENCODER_PRETRAINED_MODEL_ARCHIVE_LIST, - DPR_READER_PRETRAINED_MODEL_ARCHIVE_LIST, - ), - "gpt2": ( - GPT2Config, - TFGPT2LMHeadModel, - GPT2LMHeadModel, - GPT2_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "xlnet": ( - XLNetConfig, - TFXLNetLMHeadModel, - XLNetLMHeadModel, - XLNET_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "xlm": ( - XLMConfig, - TFXLMWithLMHeadModel, - XLMWithLMHeadModel, - XLM_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "xlm-roberta": ( - XLMRobertaConfig, - TFXLMRobertaForMaskedLM, - XLMRobertaForMaskedLM, - XLM_ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "transfo-xl": ( - TransfoXLConfig, - TFTransfoXLLMHeadModel, - TransfoXLLMHeadModel, - TRANSFO_XL_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "openai-gpt": ( - OpenAIGPTConfig, - TFOpenAIGPTLMHeadModel, - OpenAIGPTLMHeadModel, - OPENAI_GPT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "roberta": ( - RobertaConfig, - TFRobertaForCausalLM, - TFRobertaForMaskedLM, - RobertaForMaskedLM, - ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "layoutlm": ( - LayoutLMConfig, - TFLayoutLMForMaskedLM, - LayoutLMForMaskedLM, - LAYOUTLM_PRETRAINED_MODEL_ARCHIVE_LIST, - ), - "roberta-large-mnli": ( - RobertaConfig, - TFRobertaForSequenceClassification, - RobertaForSequenceClassification, - ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "camembert": ( - CamembertConfig, - TFCamembertForMaskedLM, - CamembertForMaskedLM, - CAMEMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "flaubert": ( - FlaubertConfig, - TFFlaubertWithLMHeadModel, - FlaubertWithLMHeadModel, - FLAUBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "distilbert": ( - DistilBertConfig, - TFDistilBertForMaskedLM, - DistilBertForMaskedLM, - DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "distilbert-base-distilled-squad": ( - DistilBertConfig, - TFDistilBertForQuestionAnswering, - DistilBertForQuestionAnswering, - 
DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "lxmert": ( - LxmertConfig, - TFLxmertForPreTraining, - LxmertForPreTraining, - LXMERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "lxmert-visual-feature-encoder": ( - LxmertConfig, - TFLxmertVisualFeatureEncoder, - LxmertVisualFeatureEncoder, - LXMERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "ctrl": ( - CTRLConfig, - TFCTRLLMHeadModel, - CTRLLMHeadModel, - CTRL_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "albert": ( - AlbertConfig, - TFAlbertForPreTraining, - AlbertForPreTraining, - ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "t5": ( - T5Config, - TFT5ForConditionalGeneration, - T5ForConditionalGeneration, - T5_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "electra": ( - ElectraConfig, - TFElectraForPreTraining, - ElectraForPreTraining, - ELECTRA_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), - "wav2vec2": ( - Wav2Vec2Config, - TFWav2Vec2Model, - Wav2Vec2Model, - WAV_2_VEC_2_PRETRAINED_CONFIG_ARCHIVE_MAP, - ), -} - - -def convert_pt_checkpoint_to_tf( - model_type, pytorch_checkpoint_path, config_file, tf_dump_path, compare_with_pt_model=False, use_cached_models=True -): - if model_type not in MODEL_CLASSES: - raise ValueError(f"Unrecognized model type, should be one of {list(MODEL_CLASSES.keys())}.") - - config_class, model_class, pt_model_class, aws_config_map = MODEL_CLASSES[model_type] - - # Initialise TF model - if config_file in aws_config_map: - config_file = cached_file(config_file, CONFIG_NAME, force_download=not use_cached_models) - config = config_class.from_json_file(config_file) - config.output_hidden_states = True - config.output_attentions = True - print(f"Building TensorFlow model from configuration: {config}") - tf_model = model_class(config) - - # Load weights from tf checkpoint - if pytorch_checkpoint_path in aws_config_map.keys(): - pytorch_checkpoint_path = cached_file( - pytorch_checkpoint_path, WEIGHTS_NAME, force_download=not use_cached_models - ) - # Load PyTorch checkpoint in tf2 model: - tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path) - - if compare_with_pt_model: - tfo = tf_model(tf_model.dummy_inputs, training=False) # build the network - - state_dict = torch.load(pytorch_checkpoint_path, map_location="cpu") - pt_model = pt_model_class.from_pretrained( - pretrained_model_name_or_path=None, config=config, state_dict=state_dict - ) - - with torch.no_grad(): - pto = pt_model(**pt_model.dummy_inputs) - - np_pt = pto[0].numpy() - np_tf = tfo[0].numpy() - diff = np.amax(np.abs(np_pt - np_tf)) - print(f"Max absolute difference between models outputs {diff}") - assert diff <= 2e-2, f"Error, model absolute difference is >2e-2: {diff}" - - # Save pytorch-model - print(f"Save TensorFlow model to {tf_dump_path}") - tf_model.save_weights(tf_dump_path, save_format="h5") - - -def convert_all_pt_checkpoints_to_tf( - args_model_type, - tf_dump_path, - model_shortcut_names_or_path=None, - config_shortcut_names_or_path=None, - compare_with_pt_model=False, - use_cached_models=False, - remove_cached_files=False, - only_convert_finetuned_models=False, -): - if args_model_type is None: - model_types = list(MODEL_CLASSES.keys()) - else: - model_types = [args_model_type] - - for j, model_type in enumerate(model_types, start=1): - print("=" * 100) - print(f" Converting model type {j}/{len(model_types)}: {model_type}") - print("=" * 100) - if model_type not in MODEL_CLASSES: - raise ValueError(f"Unrecognized model type {model_type}, should be one of {list(MODEL_CLASSES.keys())}.") - - config_class, model_class, pt_model_class, 
aws_model_maps, aws_config_map = MODEL_CLASSES[model_type] - - if model_shortcut_names_or_path is None: - model_shortcut_names_or_path = list(aws_model_maps.keys()) - if config_shortcut_names_or_path is None: - config_shortcut_names_or_path = model_shortcut_names_or_path - - for i, (model_shortcut_name, config_shortcut_name) in enumerate( - zip(model_shortcut_names_or_path, config_shortcut_names_or_path), start=1 - ): - print("-" * 100) - if "-squad" in model_shortcut_name or "-mrpc" in model_shortcut_name or "-mnli" in model_shortcut_name: - if not only_convert_finetuned_models: - print(f" Skipping finetuned checkpoint {model_shortcut_name}") - continue - model_type = model_shortcut_name - elif only_convert_finetuned_models: - print(f" Skipping not finetuned checkpoint {model_shortcut_name}") - continue - print( - f" Converting checkpoint {i}/{len(aws_config_map)}: {model_shortcut_name} - model_type {model_type}" - ) - print("-" * 100) - - if config_shortcut_name in aws_config_map: - config_file = cached_file(config_shortcut_name, CONFIG_NAME, force_download=not use_cached_models) - else: - config_file = config_shortcut_name - - if model_shortcut_name in aws_model_maps: - model_file = cached_file(model_shortcut_name, WEIGHTS_NAME, force_download=not use_cached_models) - else: - model_file = model_shortcut_name - - if os.path.isfile(model_shortcut_name): - model_shortcut_name = "converted_model" - - convert_pt_checkpoint_to_tf( - model_type=model_type, - pytorch_checkpoint_path=model_file, - config_file=config_file, - tf_dump_path=os.path.join(tf_dump_path, model_shortcut_name + "-tf_model.h5"), - compare_with_pt_model=compare_with_pt_model, - ) - if remove_cached_files: - os.remove(config_file) - os.remove(model_file) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument( - "--tf_dump_path", default=None, type=str, required=True, help="Path to the output Tensorflow dump file." - ) - parser.add_argument( - "--model_type", - default=None, - type=str, - help=( - f"Model type selected in the list of {list(MODEL_CLASSES.keys())}. If not given, will download and " - "convert all the models from AWS." - ), - ) - parser.add_argument( - "--pytorch_checkpoint_path", - default=None, - type=str, - help=( - "Path to the PyTorch checkpoint path or shortcut name to download from AWS. " - "If not given, will download and convert all the checkpoints from AWS." - ), - ) - parser.add_argument( - "--config_file", - default=None, - type=str, - help=( - "The config json file corresponding to the pre-trained model. \n" - "This specifies the model architecture. If not given and " - "--pytorch_checkpoint_path is not given or is a shortcut name " - "use the configuration associated to the shortcut name on the AWS" - ), - ) - parser.add_argument( - "--compare_with_pt_model", action="store_true", help="Compare Tensorflow and PyTorch model predictions." 
- ) - parser.add_argument( - "--use_cached_models", - action="store_true", - help="Use cached models if possible instead of updating to latest checkpoint versions.", - ) - parser.add_argument( - "--remove_cached_files", - action="store_true", - help="Remove pytorch models after conversion (save memory when converting in batches).", - ) - parser.add_argument("--only_convert_finetuned_models", action="store_true", help="Only convert finetuned models.") - args = parser.parse_args() - - # if args.pytorch_checkpoint_path is not None: - # convert_pt_checkpoint_to_tf(args.model_type.lower(), - # args.pytorch_checkpoint_path, - # args.config_file if args.config_file is not None else args.pytorch_checkpoint_path, - # args.tf_dump_path, - # compare_with_pt_model=args.compare_with_pt_model, - # use_cached_models=args.use_cached_models) - # else: - convert_all_pt_checkpoints_to_tf( - args.model_type.lower() if args.model_type is not None else None, - args.tf_dump_path, - model_shortcut_names_or_path=[args.pytorch_checkpoint_path] - if args.pytorch_checkpoint_path is not None - else None, - config_shortcut_names_or_path=[args.config_file] if args.config_file is not None else None, - compare_with_pt_model=args.compare_with_pt_model, - use_cached_models=args.use_cached_models, - remove_cached_files=args.remove_cached_files, - only_convert_finetuned_models=args.only_convert_finetuned_models, - ) diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_albert.py b/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_albert.py deleted file mode 100644 index 687a927ef0c486f0bc2607d16f93872dfe03f804..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_albert.py +++ /dev/null @@ -1,1403 +0,0 @@ -# coding=utf-8 -# Copyright 2018 Google AI, Google Brain and the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""PyTorch ALBERT model.""" - -import math -import os -from dataclasses import dataclass -from typing import Dict, List, Optional, Tuple, Union - -import torch -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BaseModelOutput, - BaseModelOutputWithPooling, - MaskedLMOutput, - MultipleChoiceModelOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from ...modeling_utils import PreTrainedModel -from ...pytorch_utils import apply_chunking_to_forward, find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_albert import AlbertConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "albert-base-v2" -_CONFIG_FOR_DOC = "AlbertConfig" - - -ALBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "albert-base-v1", - "albert-large-v1", - "albert-xlarge-v1", - "albert-xxlarge-v1", - "albert-base-v2", - "albert-large-v2", - "albert-xlarge-v2", - "albert-xxlarge-v2", - # See all ALBERT models at https://huggingface.co/models?filter=albert -] - - -def load_tf_weights_in_albert(model, config, tf_checkpoint_path): - """Load tf checkpoints in a pytorch model.""" - try: - import re - - import numpy as np - import tensorflow as tf - except ImportError: - logger.error( - "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " - "https://www.tensorflow.org/install/ for installation instructions." - ) - raise - tf_path = os.path.abspath(tf_checkpoint_path) - logger.info(f"Converting TensorFlow checkpoint from {tf_path}") - # Load weights from TF model - init_vars = tf.train.list_variables(tf_path) - names = [] - arrays = [] - for name, shape in init_vars: - logger.info(f"Loading TF weight {name} with shape {shape}") - array = tf.train.load_variable(tf_path, name) - names.append(name) - arrays.append(array) - - for name, array in zip(names, arrays): - print(name) - - for name, array in zip(names, arrays): - original_name = name - - # If saved from the TF HUB module - name = name.replace("module/", "") - - # Renaming and simplifying - name = name.replace("ffn_1", "ffn") - name = name.replace("bert/", "albert/") - name = name.replace("attention_1", "attention") - name = name.replace("transform/", "") - name = name.replace("LayerNorm_1", "full_layer_layer_norm") - name = name.replace("LayerNorm", "attention/LayerNorm") - name = name.replace("transformer/", "") - - # The feed forward layer had an 'intermediate' step which has been abstracted away - name = name.replace("intermediate/dense/", "") - name = name.replace("ffn/intermediate/output/dense/", "ffn_output/") - - # ALBERT attention was split between self and output which have been abstracted away - name = name.replace("/output/", "/") - name = name.replace("/self/", "/") - - # The pooler is a linear layer - name = name.replace("pooler/dense", "pooler") - - # The classifier was simplified to predictions from cls/predictions - name = name.replace("cls/predictions", "predictions") - name = name.replace("predictions/attention", "predictions") - - # Naming was changed to be more explicit - name = name.replace("embeddings/attention", "embeddings") - name = name.replace("inner_group_", "albert_layers/") - name = name.replace("group_", "albert_layer_groups/") - - # Classifier - if 
len(name.split("/")) == 1 and ("output_bias" in name or "output_weights" in name): - name = "classifier/" + name - - # No ALBERT model currently handles the next sentence prediction task - if "seq_relationship" in name: - name = name.replace("seq_relationship/output_", "sop_classifier/classifier/") - name = name.replace("weights", "weight") - - name = name.split("/") - - # Ignore the gradients applied by the LAMB/ADAM optimizers. - if ( - "adam_m" in name - or "adam_v" in name - or "AdamWeightDecayOptimizer" in name - or "AdamWeightDecayOptimizer_1" in name - or "global_step" in name - ): - logger.info(f"Skipping {'/'.join(name)}") - continue - - pointer = model - for m_name in name: - if re.fullmatch(r"[A-Za-z]+_\d+", m_name): - scope_names = re.split(r"_(\d+)", m_name) - else: - scope_names = [m_name] - - if scope_names[0] == "kernel" or scope_names[0] == "gamma": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "output_bias" or scope_names[0] == "beta": - pointer = getattr(pointer, "bias") - elif scope_names[0] == "output_weights": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "squad": - pointer = getattr(pointer, "classifier") - else: - try: - pointer = getattr(pointer, scope_names[0]) - except AttributeError: - logger.info(f"Skipping {'/'.join(name)}") - continue - if len(scope_names) >= 2: - num = int(scope_names[1]) - pointer = pointer[num] - - if m_name[-11:] == "_embeddings": - pointer = getattr(pointer, "weight") - elif m_name == "kernel": - array = np.transpose(array) - try: - if pointer.shape != array.shape: - raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") - except AssertionError as e: - e.args += (pointer.shape, array.shape) - raise - print(f"Initialize PyTorch weight {name} from {original_name}") - pointer.data = torch.from_numpy(array) - - return model - - -class AlbertEmbeddings(nn.Module): - """ - Construct the embeddings from word, position and token_type embeddings. 
- """ - - def __init__(self, config: AlbertConfig): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size) - self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - self.register_buffer( - "token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False - ) - - # Copied from transformers.models.bert.modeling_bert.BertEmbeddings.forward - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - past_key_values_length: int = 0, - ) -> torch.Tensor: - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - if position_ids is None: - position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] - - # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs - # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves - # issue #5664 - if token_type_ids is None: - if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - token_type_embeddings = self.token_type_embeddings(token_type_ids) - - embeddings = inputs_embeds + token_type_embeddings - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class AlbertAttention(nn.Module): - def __init__(self, config: AlbertConfig): - super().__init__() - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads}" - ) - - self.num_attention_heads = config.num_attention_heads - self.hidden_size = config.hidden_size - self.attention_head_size = config.hidden_size // config.num_attention_heads - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = 
nn.Linear(config.hidden_size, self.all_head_size) - - self.attention_dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.output_dropout = nn.Dropout(config.hidden_dropout_prob) - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.pruned_heads = set() - - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - - # Copied from transformers.models.bert.modeling_bert.BertSelfAttention.transpose_for_scores - def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor: - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(new_x_shape) - return x.permute(0, 2, 1, 3) - - def prune_heads(self, heads: List[int]) -> None: - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.num_attention_heads, self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.query = prune_linear_layer(self.query, index) - self.key = prune_linear_layer(self.key, index) - self.value = prune_linear_layer(self.value, index) - self.dense = prune_linear_layer(self.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.num_attention_heads = self.num_attention_heads - len(heads) - self.all_head_size = self.attention_head_size * self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - output_attentions: bool = False, - ) -> Union[Tuple[torch.Tensor], Tuple[torch.Tensor, torch.Tensor]]: - mixed_query_layer = self.query(hidden_states) - mixed_key_layer = self.key(hidden_states) - mixed_value_layer = self.value(hidden_states) - - query_layer = self.transpose_for_scores(mixed_query_layer) - key_layer = self.transpose_for_scores(mixed_key_layer) - value_layer = self.transpose_for_scores(mixed_value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. 
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs = self.attention_dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - context_layer = context_layer.transpose(2, 1).flatten(2) - - projected_context_layer = self.dense(context_layer) - projected_context_layer_dropout = self.output_dropout(projected_context_layer) - layernormed_context_layer = self.LayerNorm(hidden_states + projected_context_layer_dropout) - return (layernormed_context_layer, attention_probs) if output_attentions else (layernormed_context_layer,) - - -class AlbertLayer(nn.Module): - def __init__(self, config: AlbertConfig): - super().__init__() - - self.config = config - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.full_layer_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.attention = AlbertAttention(config) - self.ffn = nn.Linear(config.hidden_size, config.intermediate_size) - self.ffn_output = nn.Linear(config.intermediate_size, config.hidden_size) - self.activation = ACT2FN[config.hidden_act] - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - output_attentions: bool = False, - output_hidden_states: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor]: - attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions) - - ffn_output = apply_chunking_to_forward( - self.ff_chunk, - self.chunk_size_feed_forward, - self.seq_len_dim, - attention_output[0], - ) - hidden_states = 
self.full_layer_layer_norm(ffn_output + attention_output[0]) - - return (hidden_states,) + attention_output[1:] # add attentions if we output them - - def ff_chunk(self, attention_output: torch.Tensor) -> torch.Tensor: - ffn_output = self.ffn(attention_output) - ffn_output = self.activation(ffn_output) - ffn_output = self.ffn_output(ffn_output) - return ffn_output - - -class AlbertLayerGroup(nn.Module): - def __init__(self, config: AlbertConfig): - super().__init__() - - self.albert_layers = nn.ModuleList([AlbertLayer(config) for _ in range(config.inner_group_num)]) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - output_attentions: bool = False, - output_hidden_states: bool = False, - ) -> Tuple[Union[torch.Tensor, Tuple[torch.Tensor]], ...]: - layer_hidden_states = () - layer_attentions = () - - for layer_index, albert_layer in enumerate(self.albert_layers): - layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions) - hidden_states = layer_output[0] - - if output_attentions: - layer_attentions = layer_attentions + (layer_output[1],) - - if output_hidden_states: - layer_hidden_states = layer_hidden_states + (hidden_states,) - - outputs = (hidden_states,) - if output_hidden_states: - outputs = outputs + (layer_hidden_states,) - if output_attentions: - outputs = outputs + (layer_attentions,) - return outputs # last-layer hidden state, (layer hidden states), (layer attentions) - - -class AlbertTransformer(nn.Module): - def __init__(self, config: AlbertConfig): - super().__init__() - - self.config = config - self.embedding_hidden_mapping_in = nn.Linear(config.embedding_size, config.hidden_size) - self.albert_layer_groups = nn.ModuleList([AlbertLayerGroup(config) for _ in range(config.num_hidden_groups)]) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ) -> Union[BaseModelOutput, Tuple]: - hidden_states = self.embedding_hidden_mapping_in(hidden_states) - - all_hidden_states = (hidden_states,) if output_hidden_states else None - all_attentions = () if output_attentions else None - - head_mask = [None] * self.config.num_hidden_layers if head_mask is None else head_mask - - for i in range(self.config.num_hidden_layers): - # Number of layers in a hidden group - layers_per_group = int(self.config.num_hidden_layers / self.config.num_hidden_groups) - - # Index of the hidden group - group_idx = int(i / (self.config.num_hidden_layers / self.config.num_hidden_groups)) - - layer_group_output = self.albert_layer_groups[group_idx]( - hidden_states, - attention_mask, - head_mask[group_idx * layers_per_group : (group_idx + 1) * layers_per_group], - output_attentions, - output_hidden_states, - ) - hidden_states = layer_group_output[0] - - if output_attentions: - all_attentions = all_attentions + layer_group_output[-1] - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions - ) - - -class AlbertPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights 
initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = AlbertConfig - load_tf_weights = load_tf_weights_in_albert - base_model_prefix = "albert" - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights.""" - if isinstance(module, nn.Linear): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - -@dataclass -class AlbertForPreTrainingOutput(ModelOutput): - """ - Output type of [`AlbertForPreTraining`]. - - Args: - loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): - Total loss as the sum of the masked language modeling loss and the next sequence prediction - (classification) loss. - prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - sop_logits (`torch.FloatTensor` of shape `(batch_size, 2)`): - Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation - before SoftMax). - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[torch.FloatTensor] = None - prediction_logits: torch.FloatTensor = None - sop_logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -ALBERT_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Args: - config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. 
-""" - -ALBERT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare ALBERT Model transformer outputting raw hidden-states without any specific head on top.", - ALBERT_START_DOCSTRING, -) -class AlbertModel(AlbertPreTrainedModel): - config_class = AlbertConfig - base_model_prefix = "albert" - - def __init__(self, config: AlbertConfig, add_pooling_layer: bool = True): - super().__init__(config) - - self.config = config - self.embeddings = AlbertEmbeddings(config) - self.encoder = AlbertTransformer(config) - if add_pooling_layer: - self.pooler = nn.Linear(config.hidden_size, config.hidden_size) - self.pooler_activation = nn.Tanh() - else: - self.pooler = None - self.pooler_activation = None - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self) -> nn.Embedding: - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value: nn.Embedding) -> None: - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune: Dict[int, List[int]]) -> None: - """ - Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} ALBERT has - a different architecture in that its layers are shared across groups, which then has inner groups. If an ALBERT - model has 12 hidden layers and 2 hidden groups, with two inner groups, there is a total of 4 different layers. - - These layers are flattened: the indices [0,1] correspond to the two inner groups of the first hidden layer, - while [2,3] correspond to the two inner groups of the second hidden layer. - - Any layer with in index other than [0,1,2,3] will result in an error. See base class PreTrainedModel for more - information about head pruning - """ - for layer, heads in heads_to_prune.items(): - group_idx = int(layer / self.config.inner_group_num) - inner_group_idx = int(layer - group_idx * self.config.inner_group_num) - self.encoder.albert_layer_groups[group_idx].albert_layers[inner_group_idx].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPooling, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - output_attentions: Optional[None] = None, - output_hidden_states: Optional[None] = None, - return_dict: Optional[None] = None, - ) -> Union[BaseModelOutputWithPooling, Tuple]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - batch_size, seq_length = input_shape - device = input_ids.device if input_ids is not None else inputs_embeds.device - - if attention_mask is None: - attention_mask = torch.ones(input_shape, device=device) - if token_type_ids is None: - if hasattr(self.embeddings, "token_type_ids"): - buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2) - extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility - extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(self.dtype).min - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds - ) - encoder_outputs = self.encoder( - embedding_output, - extended_attention_mask, - 
head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = encoder_outputs[0] - - pooled_output = self.pooler_activation(self.pooler(sequence_output[:, 0])) if self.pooler is not None else None - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a - `sentence order prediction (classification)` head. - """, - ALBERT_START_DOCSTRING, -) -class AlbertForPreTraining(AlbertPreTrainedModel): - _keys_to_ignore_on_load_missing = [ - "predictions.decoder.weight", - "predictions.decoder.bias", - "embeddings.position_ids", - ] - - def __init__(self, config: AlbertConfig): - super().__init__(config) - - self.albert = AlbertModel(config) - self.predictions = AlbertMLMHead(config) - self.sop_classifier = AlbertSOPHead(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self) -> nn.Linear: - return self.predictions.decoder - - def set_output_embeddings(self, new_embeddings: nn.Linear) -> None: - self.predictions.decoder = new_embeddings - - def get_input_embeddings(self) -> nn.Embedding: - return self.albert.embeddings.word_embeddings - - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=AlbertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - sentence_order_label: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[AlbertForPreTrainingOutput, Tuple]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - sentence_order_label (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair - (see `input_ids` docstring) Indices should be in `[0, 1]`. `0` indicates original order (sequence A, then - sequence B), `1` indicates switched order (sequence B, then sequence A). 
- - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, AlbertForPreTraining - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2") - >>> model = AlbertForPreTraining.from_pretrained("albert-base-v2") - - >>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) - >>> # Batch size 1 - >>> outputs = model(input_ids) - - >>> prediction_logits = outputs.prediction_logits - >>> sop_logits = outputs.sop_logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.albert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output, pooled_output = outputs[:2] - - prediction_scores = self.predictions(sequence_output) - sop_scores = self.sop_classifier(pooled_output) - - total_loss = None - if labels is not None and sentence_order_label is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - sentence_order_loss = loss_fct(sop_scores.view(-1, 2), sentence_order_label.view(-1)) - total_loss = masked_lm_loss + sentence_order_loss - - if not return_dict: - output = (prediction_scores, sop_scores) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return AlbertForPreTrainingOutput( - loss=total_loss, - prediction_logits=prediction_scores, - sop_logits=sop_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -class AlbertMLMHead(nn.Module): - def __init__(self, config: AlbertConfig): - super().__init__() - - self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps) - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - self.dense = nn.Linear(config.hidden_size, config.embedding_size) - self.decoder = nn.Linear(config.embedding_size, config.vocab_size) - self.activation = ACT2FN[config.hidden_act] - self.decoder.bias = self.bias - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.activation(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - hidden_states = self.decoder(hidden_states) - - prediction_scores = hidden_states - - return prediction_scores - - def _tie_weights(self) -> None: - # To tie those two weights if they get disconnected (on TPU or when the bias is resized) - self.bias = self.decoder.bias - - -class AlbertSOPHead(nn.Module): - def __init__(self, config: AlbertConfig): - super().__init__() - - self.dropout = nn.Dropout(config.classifier_dropout_prob) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - def forward(self, pooled_output: torch.Tensor) -> torch.Tensor: - dropout_pooled_output = self.dropout(pooled_output) - logits = self.classifier(dropout_pooled_output) - return logits - - -@add_start_docstrings( - "Albert Model with a `language modeling` head on top.", - ALBERT_START_DOCSTRING, -) -class AlbertForMaskedLM(AlbertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [ - "predictions.decoder.weight", - "predictions.decoder.bias", - "embeddings.position_ids", - ] - - def __init__(self, config): - super().__init__(config) 
- - self.albert = AlbertModel(config, add_pooling_layer=False) - self.predictions = AlbertMLMHead(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self) -> nn.Linear: - return self.predictions.decoder - - def set_output_embeddings(self, new_embeddings: nn.Linear) -> None: - self.predictions.decoder = new_embeddings - - def get_input_embeddings(self) -> nn.Embedding: - return self.albert.embeddings.word_embeddings - - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=MaskedLMOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[MaskedLMOutput, Tuple]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - - Returns: - - Example: - - ```python - >>> import torch - >>> from transformers import AutoTokenizer, AlbertForMaskedLM - - >>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2") - >>> model = AlbertForMaskedLM.from_pretrained("albert-base-v2") - - >>> # add mask_token - >>> inputs = tokenizer("The capital of [MASK] is Paris.", return_tensors="pt") - >>> with torch.no_grad(): - ... 
logits = model(**inputs).logits - - >>> # retrieve index of [MASK] - >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] - >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) - >>> tokenizer.decode(predicted_token_id) - 'france' - ``` - - ```python - >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] - >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) - >>> outputs = model(**inputs, labels=labels) - >>> round(outputs.loss.item(), 2) - 0.81 - ``` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_outputs = outputs[0] - - prediction_scores = self.predictions(sequence_outputs) - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled - output) e.g. for GLUE tasks. - """, - ALBERT_START_DOCSTRING, -) -class AlbertForSequenceClassification(AlbertPreTrainedModel): - def __init__(self, config: AlbertConfig): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - - self.albert = AlbertModel(config) - self.dropout = nn.Dropout(config.classifier_dropout_prob) - self.classifier = nn.Linear(config.hidden_size, self.config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint="textattack/albert-base-v2-imdb", - output_type=SequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_output="'LABEL_1'", - expected_loss=0.12, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[SequenceClassifierOutput, Tuple]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. - """, - ALBERT_START_DOCSTRING, -) -class AlbertForTokenClassification(AlbertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config: AlbertConfig): - super().__init__(config) - self.num_labels = config.num_labels - - self.albert = AlbertModel(config, add_pooling_layer=False) - classifier_dropout_prob = ( - config.classifier_dropout_prob - if config.classifier_dropout_prob is not None - else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout_prob) - self.classifier = nn.Linear(config.hidden_size, self.config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[TokenClassifierOutput, Tuple]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.albert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - sequence_output = self.dropout(sequence_output) - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - ALBERT_START_DOCSTRING, -) -class AlbertForQuestionAnswering(AlbertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config: AlbertConfig): - super().__init__(config) - self.num_labels = config.num_labels - - self.albert = AlbertModel(config, add_pooling_layer=False) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint="twmkn9/albert-base-v2-squad2", - output_type=QuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - qa_target_start_index=12, - qa_target_end_index=13, - expected_output="'a nice puppet'", - expected_loss=7.36, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - start_positions: Optional[torch.LongTensor] = None, - end_positions: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[AlbertForPreTrainingOutput, Tuple]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits: torch.Tensor = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. - """, - ALBERT_START_DOCSTRING, -) -class AlbertForMultipleChoice(AlbertPreTrainedModel): - def __init__(self, config: AlbertConfig): - super().__init__(config) - - self.albert = AlbertModel(config) - self.dropout = nn.Dropout(config.classifier_dropout_prob) - self.classifier = nn.Linear(config.hidden_size, 1) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - token_type_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[AlbertForPreTrainingOutput, Tuple]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., - num_choices-1]` where *num_choices* is the size of the second dimension of the input tensors. 
(see - *input_ids* above) - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1] - - input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None - attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None - token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None - position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None - inputs_embeds = ( - inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1)) - if inputs_embeds is not None - else None - ) - outputs = self.albert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits: torch.Tensor = self.classifier(pooled_output) - reshaped_logits = logits.view(-1, num_choices) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(reshaped_logits, labels) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return MultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/__init__.py deleted file mode 100644 index 34022f0f8c66e21f126a1c861377e4da72ae7c6d..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/__init__.py +++ /dev/null @@ -1,216 +0,0 @@ -__version__ = "3.8.4" - -from typing import Tuple - -from . 
import hdrs as hdrs -from .client import ( - BaseConnector as BaseConnector, - ClientConnectionError as ClientConnectionError, - ClientConnectorCertificateError as ClientConnectorCertificateError, - ClientConnectorError as ClientConnectorError, - ClientConnectorSSLError as ClientConnectorSSLError, - ClientError as ClientError, - ClientHttpProxyError as ClientHttpProxyError, - ClientOSError as ClientOSError, - ClientPayloadError as ClientPayloadError, - ClientProxyConnectionError as ClientProxyConnectionError, - ClientRequest as ClientRequest, - ClientResponse as ClientResponse, - ClientResponseError as ClientResponseError, - ClientSession as ClientSession, - ClientSSLError as ClientSSLError, - ClientTimeout as ClientTimeout, - ClientWebSocketResponse as ClientWebSocketResponse, - ContentTypeError as ContentTypeError, - Fingerprint as Fingerprint, - InvalidURL as InvalidURL, - NamedPipeConnector as NamedPipeConnector, - RequestInfo as RequestInfo, - ServerConnectionError as ServerConnectionError, - ServerDisconnectedError as ServerDisconnectedError, - ServerFingerprintMismatch as ServerFingerprintMismatch, - ServerTimeoutError as ServerTimeoutError, - TCPConnector as TCPConnector, - TooManyRedirects as TooManyRedirects, - UnixConnector as UnixConnector, - WSServerHandshakeError as WSServerHandshakeError, - request as request, -) -from .cookiejar import CookieJar as CookieJar, DummyCookieJar as DummyCookieJar -from .formdata import FormData as FormData -from .helpers import BasicAuth, ChainMapProxy, ETag -from .http import ( - HttpVersion as HttpVersion, - HttpVersion10 as HttpVersion10, - HttpVersion11 as HttpVersion11, - WebSocketError as WebSocketError, - WSCloseCode as WSCloseCode, - WSMessage as WSMessage, - WSMsgType as WSMsgType, -) -from .multipart import ( - BadContentDispositionHeader as BadContentDispositionHeader, - BadContentDispositionParam as BadContentDispositionParam, - BodyPartReader as BodyPartReader, - MultipartReader as MultipartReader, - MultipartWriter as MultipartWriter, - content_disposition_filename as content_disposition_filename, - parse_content_disposition as parse_content_disposition, -) -from .payload import ( - PAYLOAD_REGISTRY as PAYLOAD_REGISTRY, - AsyncIterablePayload as AsyncIterablePayload, - BufferedReaderPayload as BufferedReaderPayload, - BytesIOPayload as BytesIOPayload, - BytesPayload as BytesPayload, - IOBasePayload as IOBasePayload, - JsonPayload as JsonPayload, - Payload as Payload, - StringIOPayload as StringIOPayload, - StringPayload as StringPayload, - TextIOPayload as TextIOPayload, - get_payload as get_payload, - payload_type as payload_type, -) -from .payload_streamer import streamer as streamer -from .resolver import ( - AsyncResolver as AsyncResolver, - DefaultResolver as DefaultResolver, - ThreadedResolver as ThreadedResolver, -) -from .streams import ( - EMPTY_PAYLOAD as EMPTY_PAYLOAD, - DataQueue as DataQueue, - EofStream as EofStream, - FlowControlDataQueue as FlowControlDataQueue, - StreamReader as StreamReader, -) -from .tracing import ( - TraceConfig as TraceConfig, - TraceConnectionCreateEndParams as TraceConnectionCreateEndParams, - TraceConnectionCreateStartParams as TraceConnectionCreateStartParams, - TraceConnectionQueuedEndParams as TraceConnectionQueuedEndParams, - TraceConnectionQueuedStartParams as TraceConnectionQueuedStartParams, - TraceConnectionReuseconnParams as TraceConnectionReuseconnParams, - TraceDnsCacheHitParams as TraceDnsCacheHitParams, - TraceDnsCacheMissParams as TraceDnsCacheMissParams, - 
TraceDnsResolveHostEndParams as TraceDnsResolveHostEndParams, - TraceDnsResolveHostStartParams as TraceDnsResolveHostStartParams, - TraceRequestChunkSentParams as TraceRequestChunkSentParams, - TraceRequestEndParams as TraceRequestEndParams, - TraceRequestExceptionParams as TraceRequestExceptionParams, - TraceRequestRedirectParams as TraceRequestRedirectParams, - TraceRequestStartParams as TraceRequestStartParams, - TraceResponseChunkReceivedParams as TraceResponseChunkReceivedParams, -) - -__all__: Tuple[str, ...] = ( - "hdrs", - # client - "BaseConnector", - "ClientConnectionError", - "ClientConnectorCertificateError", - "ClientConnectorError", - "ClientConnectorSSLError", - "ClientError", - "ClientHttpProxyError", - "ClientOSError", - "ClientPayloadError", - "ClientProxyConnectionError", - "ClientResponse", - "ClientRequest", - "ClientResponseError", - "ClientSSLError", - "ClientSession", - "ClientTimeout", - "ClientWebSocketResponse", - "ContentTypeError", - "Fingerprint", - "InvalidURL", - "RequestInfo", - "ServerConnectionError", - "ServerDisconnectedError", - "ServerFingerprintMismatch", - "ServerTimeoutError", - "TCPConnector", - "TooManyRedirects", - "UnixConnector", - "NamedPipeConnector", - "WSServerHandshakeError", - "request", - # cookiejar - "CookieJar", - "DummyCookieJar", - # formdata - "FormData", - # helpers - "BasicAuth", - "ChainMapProxy", - "ETag", - # http - "HttpVersion", - "HttpVersion10", - "HttpVersion11", - "WSMsgType", - "WSCloseCode", - "WSMessage", - "WebSocketError", - # multipart - "BadContentDispositionHeader", - "BadContentDispositionParam", - "BodyPartReader", - "MultipartReader", - "MultipartWriter", - "content_disposition_filename", - "parse_content_disposition", - # payload - "AsyncIterablePayload", - "BufferedReaderPayload", - "BytesIOPayload", - "BytesPayload", - "IOBasePayload", - "JsonPayload", - "PAYLOAD_REGISTRY", - "Payload", - "StringIOPayload", - "StringPayload", - "TextIOPayload", - "get_payload", - "payload_type", - # payload_streamer - "streamer", - # resolver - "AsyncResolver", - "DefaultResolver", - "ThreadedResolver", - # streams - "DataQueue", - "EMPTY_PAYLOAD", - "EofStream", - "FlowControlDataQueue", - "StreamReader", - # tracing - "TraceConfig", - "TraceConnectionCreateEndParams", - "TraceConnectionCreateStartParams", - "TraceConnectionQueuedEndParams", - "TraceConnectionQueuedStartParams", - "TraceConnectionReuseconnParams", - "TraceDnsCacheHitParams", - "TraceDnsCacheMissParams", - "TraceDnsResolveHostEndParams", - "TraceDnsResolveHostStartParams", - "TraceRequestChunkSentParams", - "TraceRequestEndParams", - "TraceRequestExceptionParams", - "TraceRequestRedirectParams", - "TraceRequestStartParams", - "TraceResponseChunkReceivedParams", -) - -try: - from .worker import GunicornUVLoopWebWorker, GunicornWebWorker - - __all__ += ("GunicornWebWorker", "GunicornUVLoopWebWorker") -except ImportError: # pragma: no cover - pass diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/outputs.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/outputs.py deleted file mode 100644 index b6d2d20c8f5ecc18e4efda53e1882055332d0756..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/outputs.py +++ /dev/null @@ -1,313 +0,0 @@ -# type: ignore -""" -This module defines various classes that can serve as the `output` to an interface. 
Each class must inherit from -`OutputComponent`, and each class must define a path to its template. All of the subclasses of `OutputComponent` are -automatically added to a registry, which allows them to be easily referenced in other parts of the code. -""" - -from __future__ import annotations - -from typing import Optional - -from gradio import components -from gradio.deprecation import warn_deprecation - - -def warn_outputs_deprecation(): - warn_deprecation( - "Usage of gradio.outputs is deprecated, and will not be supported in the future, " - "please import your components from gradio.components", - ) - - -class Textbox(components.Textbox): - def __init__( - self, - type: str = "text", - label: Optional[str] = None, - ): - warn_outputs_deprecation() - super().__init__(label=label, type=type) - - -class Image(components.Image): - """ - Component displays an output image. - Output type: Union[numpy.array, PIL.Image, str, matplotlib.pyplot, Tuple[Union[numpy.array, PIL.Image, str], List[Tuple[str, float, float, float, float]]]] - """ - - def __init__( - self, type: str = "auto", plot: bool = False, label: Optional[str] = None - ): - """ - Parameters: - type (str): Type of value to be passed to component. "numpy" expects a numpy array with shape (height, width, 3), "pil" expects a PIL image object, "file" expects a file path to the saved image or a remote URL, "plot" expects a matplotlib.pyplot object, "auto" detects return type. - plot (bool): DEPRECATED. Whether to expect a plot to be returned by the function. - label (str): component name in interface. - """ - warn_outputs_deprecation() - if plot: - type = "plot" - super().__init__(type=type, label=label) - - -class Video(components.Video): - """ - Used for video output. - Output type: filepath - """ - - def __init__(self, type: Optional[str] = None, label: Optional[str] = None): - """ - Parameters: - type (str): Type of video format to be passed to component, such as 'avi' or 'mp4'. Use 'mp4' to ensure browser playability. If set to None, video will keep returned format. - label (str): component name in interface. - """ - warn_outputs_deprecation() - super().__init__(format=type, label=label) - - -class Audio(components.Audio): - """ - Creates an audio player that plays the output audio. - Output type: Union[Tuple[int, numpy.array], str] - """ - - def __init__(self, type: str = "auto", label: Optional[str] = None): - """ - Parameters: - type (str): Type of value to be passed to component. "numpy" returns a 2-set tuple with an integer sample_rate and the data as 16-bit int numpy.array of shape (samples, 2), "file" returns a temporary file path to the saved wav audio file, "auto" detects return type. - label (str): component name in interface. - """ - warn_outputs_deprecation() - super().__init__(type=type, label=label) - - -class File(components.File): - """ - Used for file output. - Output type: Union[file-like, str] - """ - - def __init__(self, label: Optional[str] = None): - """ - Parameters: - label (str): component name in interface. - """ - warn_outputs_deprecation() - super().__init__(label=label) - - -class Dataframe(components.Dataframe): - """ - Component displays 2D output through a spreadsheet interface. 
- Output type: Union[pandas.DataFrame, numpy.array, List[Union[str, float]], List[List[Union[str, float]]]] - """ - - def __init__( - self, - headers: Optional[list[str]] = None, - max_rows: Optional[int] = 20, - max_cols: Optional[int] = None, - overflow_row_behaviour: str = "paginate", - type: str = "auto", - label: Optional[str] = None, - ): - """ - Parameters: - headers (List[str]): Header names to dataframe. Only applicable if type is "numpy" or "array". - max_rows (int): Maximum number of rows to display at once. Set to None for infinite. - max_cols (int): Maximum number of columns to display at once. Set to None for infinite. - overflow_row_behaviour (str): If set to "paginate", will create pages for overflow rows. If set to "show_ends", will show initial and final rows and truncate middle rows. - type (str): Type of value to be passed to component. "pandas" for pandas dataframe, "numpy" for numpy array, or "array" for Python array, "auto" detects return type. - label (str): component name in interface. - """ - warn_outputs_deprecation() - super().__init__( - headers=headers, - type=type, - label=label, - max_rows=max_rows, - max_cols=max_cols, - overflow_row_behaviour=overflow_row_behaviour, - ) - - -class Timeseries(components.Timeseries): - """ - Component accepts pandas.DataFrame. - Output type: pandas.DataFrame - """ - - def __init__( - self, x: str = None, y: str | list[str] = None, label: Optional[str] = None - ): - """ - Parameters: - x (str): Column name of x (time) series. None if csv has no headers, in which case first column is x series. - y (Union[str, List[str]]): Column name of y series, or list of column names if multiple series. None if csv has no headers, in which case every column after first is a y series. - label (str): component name in interface. - """ - warn_outputs_deprecation() - super().__init__(x=x, y=y, label=label) - - -class State(components.State): - """ - Special hidden component that stores state across runs of the interface. - Output type: Any - """ - - def __init__(self, label: Optional[str] = None): - """ - Parameters: - label (str): component name in interface (not used). - """ - warn_outputs_deprecation() - super().__init__(label=label) - - -class Label(components.Label): - """ - Component outputs a classification label, along with confidence scores of top categories if provided. Confidence scores are represented as a dictionary mapping labels to scores between 0 and 1. - Output type: Union[Dict[str, float], str, int, float] - """ - - def __init__( - self, - num_top_classes: Optional[int] = None, - type: str = "auto", - label: Optional[str] = None, - ): - """ - Parameters: - num_top_classes (int): number of most confident classes to show. - type (str): Type of value to be passed to component. "value" expects a single out label, "confidences" expects a dictionary mapping labels to confidence scores, "auto" detects return type. - label (str): component name in interface. - """ - warn_outputs_deprecation() - super().__init__(num_top_classes=num_top_classes, type=type, label=label) - - -class KeyValues: - """ - Component displays a table representing values for multiple fields. - Output type: Union[Dict, List[Tuple[str, Union[str, int, float]]]] - """ - - def __init__(self, value: str = " ", *, label: Optional[str] = None, **kwargs): - """ - Parameters: - value (str): IGNORED - label (str): component name in interface. - """ - raise DeprecationWarning( - "The KeyValues component is deprecated. 
Please use the DataFrame or JSON " - "components instead." - ) - - -class HighlightedText(components.HighlightedText): - """ - Component creates text that contains spans that are highlighted by category or numerical value. - Output is represent as a list of Tuple pairs, where the first element represents the span of text represented by the tuple, and the second element represents the category or value of the text. - Output type: List[Tuple[str, Union[float, str]]] - """ - - def __init__( - self, - color_map: dict[str, str] = None, - label: Optional[str] = None, - show_legend: bool = False, - ): - """ - Parameters: - color_map (Dict[str, str]): Map between category and respective colors - label (str): component name in interface. - show_legend (bool): whether to show span categories in a separate legend or inline. - """ - warn_outputs_deprecation() - super().__init__(color_map=color_map, label=label, show_legend=show_legend) - - -class JSON(components.JSON): - """ - Used for JSON output. Expects a JSON string or a Python object that is JSON serializable. - Output type: Union[str, Any] - """ - - def __init__(self, label: Optional[str] = None): - """ - Parameters: - label (str): component name in interface. - """ - warn_outputs_deprecation() - super().__init__(label=label) - - -class HTML(components.HTML): - """ - Used for HTML output. Expects an HTML valid string. - Output type: str - """ - - def __init__(self, label: Optional[str] = None): - """ - Parameters: - label (str): component name in interface. - """ - super().__init__(label=label) - - -class Carousel(components.Carousel): - """ - Component displays a set of output components that can be scrolled through. - """ - - def __init__( - self, - components: components.Component | list[components.Component], - label: Optional[str] = None, - ): - """ - Parameters: - components (Union[List[Component], Component]): Classes of component(s) that will be scrolled through. - label (str): component name in interface. - """ - warn_outputs_deprecation() - super().__init__(components=components, label=label) - - -class Chatbot(components.Chatbot): - """ - Component displays a chatbot output showing both user submitted messages and responses - Output type: List[Tuple[str, str]] - """ - - def __init__(self, label: Optional[str] = None): - """ - Parameters: - label (str): component name in interface (not used). - """ - warn_outputs_deprecation() - super().__init__(label=label) - - -class Image3D(components.Model3D): - """ - Used for 3D image model output. - Input type: File object of type (.obj, glb, or .gltf) - """ - - def __init__( - self, - clear_color=None, - label: Optional[str] = None, - ): - """ - Parameters: - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None. - """ - warn_outputs_deprecation() - super().__init__(clear_color=clear_color, label=label) diff --git a/spaces/cihyFjudo/fairness-paper-search/Blacks With Latin Women Sex Porn [TOP].md b/spaces/cihyFjudo/fairness-paper-search/Blacks With Latin Women Sex Porn [TOP].md deleted file mode 100644 index 90da32e40c2ca8aaa3612d4acde1e8a7b448af4b..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Blacks With Latin Women Sex Porn [TOP].md +++ /dev/null @@ -1,20 +0,0 @@ -
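For reference, a minimal sketch of the migration the deleted `gradio/outputs.py` module points to: its classes are thin subclasses of `gradio.components` that call `warn_outputs_deprecation()` when constructed. This assumes a Gradio 3.x environment where both `gradio.outputs` and `gradio.components` are still importable (the `outputs` module was removed in later releases); the `classify` function and its label scores are hypothetical placeholders, not anything from the deleted Space.

```python
import gradio as gr


def classify(text: str) -> dict:
    # Hypothetical stand-in model: returns a label -> confidence mapping,
    # which is the value shape the Label output component expects.
    return {"positive": 0.7, "negative": 0.3}


# Legacy style implemented by the deleted module: gradio.outputs.Label is a thin
# wrapper around gradio.components.Label and emits a deprecation warning on use.
legacy_demo = gr.Interface(
    fn=classify, inputs="text", outputs=gr.outputs.Label(num_top_classes=2)
)

# Style recommended by warn_outputs_deprecation(): use the component directly.
current_demo = gr.Interface(
    fn=classify, inputs="text", outputs=gr.Label(num_top_classes=2)
)
```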
    -

    Large racial and gender wage gaps in the U.S. remain, even as they have narrowed in some cases over the years. Among full- and part-time workers in the U.S., blacks in 2015 earned just 75% as much as whites in median hourly earnings and women earned 83% as much as men.

    -

    White and Asian women have narrowed the wage gap with white men to a much greater degree than black and Hispanic women. For example, white women narrowed the wage gap in median hourly earnings by 22 cents from 1980 (when they earned, on average, 60 cents for every dollar earned by a white man) to 2015 (when they earned 82 cents). By comparison, black women only narrowed that gap by 9 cents, from earning 56 cents for every dollar earned by a white man in 1980 to 65 cents today. Asian women followed roughly the trajectory of white women (but earned a slightly higher 87 cents per dollar earned by a white man in 2015), whereas Hispanic women fared even worse than black women, narrowing the gap by just 5 cents (earning 58 cents on the dollar in 2015).

    -

    blacks with latin women sex porn


    Download Filehttps://tinurli.com/2uwirl



    -

    The continued cuts to state and local governments also threaten to undermine progress that the public sector has made toward greater wage equality. The economy is losing jobs in a sector (state and local government) that often has smaller pay gaps than the private sector. Especially for people of color and women with high levels of education, this is a step in the wrong direction.

    -

    Our ongoing work to invest in women as they make meaningful contributions within our company and in our communities includes our focus on being a great place to work for our female employees, making the financial lives of our female clients better and advancing the economic empowerment of women in communities around the world.

    -

    Racial groups also differ little in their ideas of the best age to do these things, with the average age seen as ideal for marriage between 25 and 28 for all groups and the average ideal age for having a first child between 27 and 29. Also, racial groups have in common that, in each group, compared to women, men see a slightly higher age as appropriate for themselves to take on these roles.

    -

    One attitude about hooking up also shows virtually no racial differences: in all groups, approximately 70%-75% of men and women say they would be less interested in a relationship with a person who hooks up a lot, as this graph shows.

    -

    Another behavioral indicator is the number of partners with whom one has ever had intercourse. In this analysis, virgins count as having zero partners, and we limited the count to the number of partners of the other sex, ignoring any same-sex partners. We examine medians rather than averages because there are some extreme outliers with many partners, which affect means more than medians. The graph below shows that among both women and men, Asians have had the least number of partners. Among women, there is little difference between Whites, Blacks, and Latinas, all of whom have had between 1.5 and 2 partners. Among men, however, Blacks have substantially more than other men, with a median slightly more than 4, compared to between 2 and 2.5 for Whites and Latinx men, and less for Asian men.

    -

    -

    As for differences between Blacks and other racial groups, the patterns differ strongly by gender. As many race and gender scholars have argued, an intersectional approach is often needed when the way race affects men and women is very different. Let us start with men. Black men have had more sexual partners than White or other men. What explains this? Prior research has shown that youth of any race who grow up in poverty or with less educated parents are more likely to have first intercourse earlier and Blacks are especially likely to grow up disadvantaged. Consistent with this, past research on the age of sexual debut among US adolescents, has shown that Black youth have an earlier age at first intercourse than Whites, which is likely to lead to more sexual experience by the age of most of the college respondents. (The survey question asks about number of partners ever, not only during college, so having started during or before high school could lead to a higher lifetime number.)

    -

    Grace, I will be forthcoming from the start. I am a single Caucasian male in my mid 30s. I am slightly above average in appearance on my best day. My personal preference for relationship partners is non-american. I like accents. I do typically date oriental Asians admittedly. If I had to choose a group that I considered most physically appealing it would be Indian or latina. Actual race preference is none. Kind, gentle hearted people win me over with relative ease.

    -

    1: Black women ARE the least sought after, at least in the US. I do not know why, but I can give some common generalized reasons that I'm certain contribute. My own personal opinions, when added, will be in brackets, and not necessarily only meant for you. a) Men like submissive women. Black women are stereotyped to be even angrier and stubborn than the feminist woman. [I believe both the man and the woman should submit to one another. Love is not proud] b) Men are insecure. Men assume a black woman has been with a black man, that all black men are hung, that the black woman wants a hung man, and that she will never be truly satisfied with him sexually. [Even taking the average of the largest sized group -black- and the smallest -asian- its only an inch difference. For white and black its 1/4 of a cm. Men are stupid. And most women have no preference, and most women with a preference prefer average. ....Lucky me lol.] c) When you marry someone, you often marry the friends and family and most non-black Americans fear black neighborhoods and black people. d) Most white people think most blacks hate white people. (This ties into the last one) e) Back to the ghetto. I pride myself in loving, and this one is specifically my opinion but even I won't date a woman from the ghetto unless she manages to make me fall in love on first contact. It's any race though. I don't personally care about the poverty aspect, as I am old school in believing a man should support his family and the woman should take care of the children (which I also believe is the greatest honor, job, responsibility that any person could ever wish for and if she don't want to then I will, the child deserves full time care. I hope she does though. 9 months isn't the end of creating, 18 more years of shaping this wonderful person is the epitome of creation).
    2) I'd like to say something..
    Racism, is just hatred given a name, sometimes even at ones self. We need to stop teaching history. Most people aren't capable of handling it.

    -

    2) regarding porn (if u didn't like me before, I'm in trouble now lol).. Porn hub, lists men searching ebony 2nd (Japanese higher). 2nd is still high. However, the states with the most viewership account for majority of the blacks in america, and (from a .gov website) blacks view more porn than anyone, with a greater increase in frequency meaning their lead is increasing. So blacks watch the most, and watch black porn the most. Makes sense. Race preference, I prefer the man in the video to be white, and not distinguishable, as I'm pretending I'm him lol.. Women, pretty faces with no tats.. Race irrelevant, positions relevant (probably the only guy who don't like doggy)

    -

    I totally agree with the comments of professorh, ale wallace, and grace ojo. How you going to do a study and then use racist stereotypes for explanation of those results? Black women are breaking stereotypes and are more conservative than others think.

    -

    You are using the word "median" incorrectly. A discrete number of sexual partners can never have a median containing a decimal value.
    0 0 1 1 1 1 1 1 2 2 3
    If the above is your data, the median is 1, the mean is 13/11. I believe you are calculating means up there. The real median values would actually be more useful here, since someone with an extremely high number of partners skews the data by a large amount.

    -

    The idea that an accepted explanation for black female students being virgins is not being sought after, (basically saying they cant find men to f*ck them) is laughable. It is very easy as a young student to find someone to have s*x with. It's clear that they just cannot accept that Black women have higher morals than white/hispanic women. With Black men eager for hookups, you dont think these men are begging black students for s*x. Black men back in the day would always say they went after white women because they were "easier". As a black female, i'm very aware that people know so little about my tribe and just base their opinions on stereotypes. I don't even care at this point

    -

    I am a white female, and I was recently in a 3-year relationship with a black man. He watched porn constantly. I noticed that 90% of his DVDs were Asian girls. And I tried to ask him what was the reason or what about Asian girls turned him on, but he, as guys do... denied he had any specific preference. He also cheated on me numerous times, eventually claiming that it's a man's right to have "side pieces". And his "main" one was Puerto Rican and 1/2 his age. He told me that he was not missing anything or unhappy with our sex life at all. So, obviously I was not only hurt but confused about why he strayed so often. He said it was in his genes, as men need not only more than 1 partner, they have their "main" one that they get sex from, someone who does all the things they need like cook for them, clean, who is always there so that they are never without. But they expect their main to be 100% faithful to them. I never will understand a man's way or beliefs about what a "true" relationship or commitment is to be. I have heard that ALL black men cheat, I do agree to an extent, and have noticed their obsession with porn also. They, as I've noticed, feel it's normal and have told me we females need to just accept it. Men have double standards not only in life but also relationships. They to me want a woman to be completely committed to them, while they do as they please. It's such BS. They like variety and feel it's a man's right. I've never cheated even when I became aware my partner was, even though I was not satisfied with our sex life. Bottom line for me is...
    Men need to spend more time satisfying their "main" partner because IF your girl uses the typical having-no-sex thing in the relationship... she isn't denying herself anything, because she's not being satisfied by him. He's the only one deprived. HELLO! Men don't get that at all, seriously. We females are expected to do all the work. We are to perform their fantasies for them, do what they want or need to get them "aroused", fulfill what he believes is your duty to him as his partner. Yet he doesn't do anything for you, he doesn't do any foreplay, doesn't do anything to arouse you, he slaps spit on you, or oil, and just busts a nut. He's satisfied, but you ain't. Men don't seem to understand this. In their mind, they are like, I'm getting my needs met, what is her problem. It's very frustrating to me honestly. Now they have designed a "realistic doll" that is SO damn realistic it's scary. To me, they have replaced "real" women; it's sick. Guys spend most of their time looking for ways, or things OUTSIDE OF the relationship, to satisfy themselves rather than put that effort and time into being the other half of their relationship. We have needs also. Men always complain about us, and believe we are drama, we bitch about everything constantly, we get in their business by asking them questions, etc. IF they were committed to the relationship as we are, and lived as a loyal man to their "main", they'd have no reason or desire to stray or need for another woman. This world has become all about,
    HUSTLING, GETTING WHATEVER YOU FEEL IS YOURS, HAVING SEX WITH WHOEVER. It's become so selfish, as if today we females only exist to please men. We are not to have our own thoughts, feelings, needs, opinions, etc. If a guy needs to be satisfied and you are the one who just happens to be around, well, in his mind, GET IT DONE. It's not only so rude, but disrespectful to us as females and human beings. I love sex, and I'm an older woman, and having sex many times a day is my preference. But I'm turned off when guys talk about it the whole time we are together. Or, OMG, they grab their phone while we are having sex and watch a porn video. It's so fucking rude. If they feel that it's normal to do that, they have issues in my opinion. To me it's expecting ME to perform and do the sex acts while they are looking at another female and fantasizing that the girl in the video is the one who's sucking them or sexually pleasing them. But you're doing all the work. It's so insulting to me. And they don't seem to understand why. But when I turn the table and present the situation to them and ask how they would feel, or if it's OK for me to do the same thing, IT'S NEVER OK; they respond saying that it's disrespectful to them. But they don't feel that it's wrong for them. Double standard, and to me NARCISSISTIC 100%.
    Men these days need a reality check for real.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Easyboost Photo Print Crack Create Stunning Photo Layouts with Ease.md b/spaces/cihyFjudo/fairness-paper-search/Easyboost Photo Print Crack Create Stunning Photo Layouts with Ease.md deleted file mode 100644 index 242c8fc78247e510e0ec8bc63255c7d3a72d280e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Easyboost Photo Print Crack Create Stunning Photo Layouts with Ease.md +++ /dev/null @@ -1,6 +0,0 @@ -

    easyboostphotoprintcrack


    Download File ❤❤❤ https://tinurli.com/2uwkwt



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Enjoy Hitman Reborn Opening 5 HD Quality 720p The Best Anime Theme Song Ever.md b/spaces/cihyFjudo/fairness-paper-search/Enjoy Hitman Reborn Opening 5 HD Quality 720p The Best Anime Theme Song Ever.md deleted file mode 100644 index cf3bfb36a93e752c39fa3ded414069b2fcf18c18..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Enjoy Hitman Reborn Opening 5 HD Quality 720p The Best Anime Theme Song Ever.md +++ /dev/null @@ -1,6 +0,0 @@ -

    !FULL! Download Hadrah Basaudan Pdf To 12l


    Download 🆗 https://tinurli.com/2uwhOp



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/From Up on Poppy Hill Eng Sub Download Film A Guide to the Best Sources and Quality.md b/spaces/cihyFjudo/fairness-paper-search/From Up on Poppy Hill Eng Sub Download Film A Guide to the Best Sources and Quality.md deleted file mode 100644 index 3a3498356df887fd5d0161ee44cce5eeef205390..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/From Up on Poppy Hill Eng Sub Download Film A Guide to the Best Sources and Quality.md +++ /dev/null @@ -1,15 +0,0 @@ - -

    From Up on Poppy Hill premiered on July 16, 2011, in Japan. It received positive reviews from most film critics and grossed $61 million worldwide. An English version was distributed by GKIDS; it was released to theaters on March 15, 2013, in North America.[6]

    -

    From Up on Poppy Hill was officially revealed as the new Studio Ghibli film for 2011 on December 15, 2010.[13] It is based on the 1980s shōjo manga of the same name by Tetsuo Sayama and Chizuru Takahashi.[14] It was also revealed that Gorō Miyazaki would be directing.[13] Gorō Miyazaki is the eldest son of Studio Ghibli's co-founder and acclaimed director Hayao Miyazaki; he made his directorial debut in the 2006 film Tales from Earthsea.[13] From Up on Poppy Hill is his second work.[15]

    -

    from up on poppy hill eng sub download film


    Download File ✵✵✵ https://tinurli.com/2uwjnx



    -

    In a press interview given after the 2011 Tōhoku earthquake and tsunami, it was announced the film's production was affected by the rolling blackouts imposed after this disaster.[17] In particular, the animation process was forced to proceed in the night to minimize disruptions.[17] When pressed about the progress, it was revealed that the animation was "about 50% completed", though it was added that the "animation would have otherwise been over 70% completed without the disaster".[17] However, Hayao Miyazaki assured the public that the film would still be released on July 16, 2011, as previously announced, saying that it was their responsibility to do so.[17] Gorō Miyazaki stated that while most of the staff was not affected by the disaster, there were several "who did go through a period of mental affectedness because of what happened and that took some time to recover from."[18]

    -

    On August 17, 2011, it was announced that From Up on Poppy Hill would be one of the Japanese films being showcased at the 2011 Toronto International Film Festival, which was held from September 8 to 18, 2011.[30] It was also revealed that the film would be showcased in the "Japan International Premiere" section, which is part of the "Contemporary World Cinema" event in the festival.[30]

    -

    Mark Schilling of The Japan Times described From Up on Poppy Hill as a "pure-hearted, melodramatic youth film".[42] The reviewer criticized the story as "predictable" and called the direction "pedestrian".[42] However, he concluded the review by praising the film, saying "a wealth of period detail brings the era to nostalgic/realistic life".[42] Takashi Kondo of The Daily Yomiuri said that it "is filled with many experiences that have been lost in our daily life".[43] Kondo also said that "the father-son joint production [of Hayao and Gorō Miyazaki] achieved a wonderful result and [From Up on Poppy Hill] is a work that needs to be seen in this day and age".[43]

    -

    A. O. Scott of The New York Times praised From Up on Poppy Hill for its visuals as well as its characterization. Although Scott said that the "specific tragedy that lies in the background may not register with children," he would say that adults are "likely to be charmed by the love story and enchanted by the delicate rendering of a bygone but not entirely forgotten era".[44] Kenneth Turan of the Los Angeles Times called the film "a time-machine dream of a not-so-distant past, a sweet and honestly sentimental story that also represents a collaboration between the greatest of Japanese animators and his up-and-coming son." Turan also said that Latin Quarter "is "Poppy Hill" at its most fantastical." On the characterizations, Turan stated, "the respect and politeness with which all the characters, even the teenage protagonists, treat one another is a far cry from what can go on in this day and age."[45] Scott Tobias of NPR argued that the thematical aspects were too obvious but that "the warm tenor of the film that ultimately rescues it."[46]

    -

    Uncharacteristically violent for a Ghibli film, Mononoke tells the story of Ashitaka, who searches for the cure to a curse and finds himself caught up in a conflict between forest spirits and an evil mining company. Bloody but brilliant, this film highlights some of Miyazaki's best high-octane animation.

    -

    Shōya had been bullying a deaf student named Shōko Nishimiya and alienated himself from his classmates by doing so, becoming a victim of bullying himself. Throughout the film, he seeks redemption with her. This is a great anime movie with a strong anti-bullying message, as well as emotional depth that will resonate with anyone who watches it.

    -

    -

    Director Shinichirō Watanabe has said that he viewed Cowboy Bebop as miniature films, and indeed viewed the film as just an extension of that premise. Lovers of the anime will not be disappointed; it retains that signature style from the anime T.V. series and adds an amazing musical score and visual flares that tie this work to the series in a beautiful way.

    -

    This anime tells the story of students trying to save their school clubhouse from demolition before the Tokyo Olympics in 1964. This is primarily a character piece. The characters are so strong in this anime movie, and the subject of friendship is so prevalent and touching throughout the film.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/ck46/extractive_summaries/paraphraser.py b/spaces/ck46/extractive_summaries/paraphraser.py deleted file mode 100644 index 60b641a037fbf980c13b4f56b27a9fb2880e876c..0000000000000000000000000000000000000000 --- a/spaces/ck46/extractive_summaries/paraphraser.py +++ /dev/null @@ -1,189 +0,0 @@ -import re -import numpy as np -import itertools -import torch - -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -from sentence_transformers import SentenceTransformer -from sklearn.feature_extraction.text import CountVectorizer -from sklearn.metrics.pairwise import cosine_similarity - - -class KeywordExtraction: - def __init__(self, n_gram_range=(1, 1), stop_words='english', model_name='distilbert-base-nli-mean-tokens'): - self.n_gram_range = n_gram_range - self.stop_words = stop_words - self.model_name = model_name - self.model = SentenceTransformer(self.model_name) - - def __call__(self, doc, top_n=5, diversity=('mmr', 0.7)): - doc_embedding = self.get_document_embeddings(doc) - candidates = self.get_candidates(doc) - candidate_embeddings = self.get_candidate_embeddings(candidates) - try: - if diversity[0] == 'mmr': - # print('using maximal marginal relevance method...') - return self.maximal_marginal_relevance(doc_embedding, - candidate_embeddings, - candidates, - top_n=top_n, - diversity=diversity[1]) - elif diversity[0] == 'mss': - # print('using max sum similarity method...') - return self.max_sum_similarity(doc_embedding, - candidate_embeddings, - candidates, - top_n=top_n, - nr_candidates=diversity[1]) - else: - # print('using default method...') - return self.get_keywords(doc_embedding, candidate_embeddings, candidates, top_n) - except Exception as e: - print(e) - - def get_candidates(self, doc): - # Extract candidate words/phrases - count = CountVectorizer(ngram_range=self.n_gram_range, stop_words=self.stop_words).fit([doc]) - return count.get_feature_names_out() - - def get_candidate_embeddings(self, candidates): - return self.model.encode(candidates) - - def get_document_embeddings(self, doc): - return self.model.encode([doc]) - - def get_keywords(self, doc_embedding, candidate_embeddings, candidates, top_n=5): - distances = cosine_similarity(doc_embedding, candidate_embeddings) - keywords = [candidates[index] for index in distances.argsort()[0][-top_n:]] - return keywords - - def max_sum_similarity(self, doc_embedding, candidate_embeddings, candidates, top_n, nr_candidates): - # Calculate distances and extract keywords - distances = cosine_similarity(doc_embedding, candidate_embeddings) - distances_candidates = cosine_similarity(candidate_embeddings, - candidate_embeddings) - - # Get top_n words as candidates based on cosine similarity - words_idx = list(distances.argsort()[0][-nr_candidates:]) - words_vals = [candidates[index] for index in words_idx] - distances_candidates = distances_candidates[np.ix_(words_idx, words_idx)] - - # Calculate the combination of words that are the least similar to each other - min_sim = np.inf - candidate = None - for combination in itertools.combinations(range(len(words_idx)), top_n): - sim = sum([distances_candidates[i][j] for i in combination for j in combination if i != j]) - if sim < min_sim: - candidate = combination - min_sim = sim - - return [words_vals[idx] for idx in candidate] - - def maximal_marginal_relevance(self, doc_embedding, word_embeddings, words, top_n, diversity): - # Extract similarity within words, and between words and the document - word_doc_similarity = 
cosine_similarity(word_embeddings, doc_embedding) - word_similarity = cosine_similarity(word_embeddings) - - # Initialize candidates and already choose best keyword/keyphras - keywords_idx = [np.argmax(word_doc_similarity)] - candidates_idx = [i for i in range(len(words)) if i != keywords_idx[0]] - - for _ in range(top_n - 1): - # Extract similarities within candidates and - # between candidates and selected keywords/phrases - candidate_similarities = word_doc_similarity[candidates_idx, :] - target_similarities = np.max(word_similarity[candidates_idx][:, keywords_idx], axis=1) - - # Calculate MMR - mmr = (1-diversity) * candidate_similarities - diversity * target_similarities.reshape(-1, 1) - mmr_idx = candidates_idx[np.argmax(mmr)] - - # Update keywords & candidates - keywords_idx.append(mmr_idx) - candidates_idx.remove(mmr_idx) - - return [words[idx] for idx in keywords_idx] - - -def regex(phrase, m=0, n=3): - strng = "([\s]*[a-zA-Z0-9]*[\s]*){%d,%d}" % (m,n) - return strng.join(phrase.split()) - -def remove_square_brackets(text): - return re.sub('\[[0-9]+\]', '', text) - -def remove_extra_spaces(text): - return re.sub('[\s]{2,}', ' ', text) - - -def preprocess_text(text): - text = re.sub('\[[0-9]+\]', '', text) - text = re.sub('[\s]{2,}', ' ', text) - text = text.strip() - return text - -def sent_tokenize(text): - sents = text.split('.') - sents = [s.strip() for s in sents if len(s)>0] - return sents - -def get_key_sentences(text, top_n=5, diversity=('mmr', 0.6)): - kw_extractor = KeywordExtraction(n_gram_range=(1,3)) - text = preprocess_text(text) - sentences = sent_tokenize(text) - key_phrases = kw_extractor(text, top_n=top_n, diversity=diversity) - - if key_phrases is None: - return None - - key_sents = dict() - for phrase in key_phrases: - found = False - for i, sent in enumerate(sentences): - if re.search(regex(phrase), sent): - found = True - if i not in key_sents: - key_sents[i] = sent - if not found: - print(f'The phrase "{phrase}" was not matched!') - return key_sents - - -class ParaphraseModel: - def __init__(self, model_name="Vamsi/T5_Paraphrase_Paws"): - self.model_name = model_name - self.tokenizer = AutoTokenizer.from_pretrained(self.model_name) - self.model = AutoModelForSeq2SeqLM.from_pretrained(self.model_name) - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - def __call__(self, inputs, top_k=200, top_p=0.95, num_sequences=5): - text = self.prepare_list_input(inputs) if type(inputs) == type([]) else f"paraphrase: {inputs} " - - encoding = self.tokenizer.batch_encode_plus(text, pad_to_max_length=True, return_tensors="pt") - - input_ids = encoding["input_ids"].to(self.device) - attention_masks = encoding["attention_mask"].to(self.device) - - outputs = self.model.generate( - input_ids=input_ids, attention_mask=attention_masks, - max_length=256, - do_sample=True, - top_k=top_k, - top_p=top_p, - early_stopping=True, - num_return_sequences=num_sequences - ) - - lines = [] - for output in outputs: - line = self.tokenizer.decode(output, - skip_special_tokens=True, - clean_up_tokenization_spaces=True) - lines.append(line) - return lines - - def prepare_list_input(self, lst): - sentences = [] - for sent in lst: - sentences.append(f"paraphrase: {sent} ") - return sentences diff --git a/spaces/cleanmaster/akagi-sovits3/resample.py b/spaces/cleanmaster/akagi-sovits3/resample.py deleted file mode 100644 index fabae4afbb330cccad1681b7941a63547c93c640..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/resample.py +++ 
/dev/null @@ -1,47 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - # speaker 's5', 'p280', 'p315' are excluded, - speaker = spkdir.split(os.sep)[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, None) - wav, _ = librosa.effects.trim(wav, top_db=20) - peak = np.abs(wav).max() - if peak > 1.0: - wav = 0.98 * wav / peak - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2) - save_name = wav_name - save_path2 = os.path.join(args.out_dir2, speaker, save_name) - wavfile.write( - save_path2, - args.sr2, - (wav2 * np.iinfo(np.int16).max).astype(np.int16) - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr2", type=int, default=32000, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir") - parser.add_argument("--out_dir2", type=str, default="./dataset/32k", help="path to target dir") - args = parser.parse_args() - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/_soundfile_data/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/_soundfile_data/__init__.py deleted file mode 100644 index 2bf8216292c824cee30c61c156dd3d202c303531..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/_soundfile_data/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# this file makes _soundfile_data importable, so we can query its path -# when searching for the libsndfile binaries. -pass diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/http_exceptions.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/http_exceptions.py deleted file mode 100644 index b5d16ea4ec1058f4e9c011677b8b34ffadc22622..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/aiohttp/http_exceptions.py +++ /dev/null @@ -1,107 +0,0 @@ -"""Low-level http related exceptions.""" - - -from textwrap import indent -from typing import Optional, Union - -from .typedefs import _CIMultiDict - -__all__ = ("HttpProcessingError",) - - -class HttpProcessingError(Exception): - """HTTP error. - - Shortcut for raising HTTP errors with custom code, message and headers. - - code: HTTP Error code. - message: (optional) Error message. 
- headers: (optional) Headers to be sent in response, a list of pairs - """ - - code = 0 - message = "" - headers = None - - def __init__( - self, - *, - code: Optional[int] = None, - message: str = "", - headers: Optional[_CIMultiDict] = None, - ) -> None: - if code is not None: - self.code = code - self.headers = headers - self.message = message - - def __str__(self) -> str: - msg = indent(self.message, " ") - return f"{self.code}, message:\n{msg}" - - def __repr__(self) -> str: - return f"<{self.__class__.__name__}: {self.code}, message={self.message!r}>" - - -class BadHttpMessage(HttpProcessingError): - - code = 400 - message = "Bad Request" - - def __init__(self, message: str, *, headers: Optional[_CIMultiDict] = None) -> None: - super().__init__(message=message, headers=headers) - self.args = (message,) - - -class HttpBadRequest(BadHttpMessage): - - code = 400 - message = "Bad Request" - - -class PayloadEncodingError(BadHttpMessage): - """Base class for payload errors""" - - -class ContentEncodingError(PayloadEncodingError): - """Content encoding error.""" - - -class TransferEncodingError(PayloadEncodingError): - """transfer encoding error.""" - - -class ContentLengthError(PayloadEncodingError): - """Not enough data for satisfy content length header.""" - - -class LineTooLong(BadHttpMessage): - def __init__( - self, line: str, limit: str = "Unknown", actual_size: str = "Unknown" - ) -> None: - super().__init__( - f"Got more than {limit} bytes ({actual_size}) when reading {line}." - ) - self.args = (line, limit, actual_size) - - -class InvalidHeader(BadHttpMessage): - def __init__(self, hdr: Union[bytes, str]) -> None: - if isinstance(hdr, bytes): - hdr = hdr.decode("utf-8", "surrogateescape") - super().__init__(f"Invalid HTTP Header: {hdr}") - self.hdr = hdr - self.args = (hdr,) - - -class BadStatusLine(BadHttpMessage): - def __init__(self, line: str = "") -> None: - if not isinstance(line, str): - line = repr(line) - super().__init__(f"Bad status line {line!r}") - self.args = (line,) - self.line = line - - -class InvalidURLError(BadHttpMessage): - pass diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dsd.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dsd.c deleted file mode 100644 index e039302c99a541f29407de554750b9719fc9ecfc..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dsd.c +++ /dev/null @@ -1,130 +0,0 @@ -/* - * Direct Stream Digital (DSD) decoder - * based on BSD licensed dsd2pcm by Sebastian Gesemann - * Copyright (c) 2009, 2011 Sebastian Gesemann. All rights reserved. - * Copyright (c) 2014 Peter Ross - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include -#include "libavutil/attributes.h" -#include "libavutil/reverse.h" -#include "libavutil/thread.h" -#include "dsd.h" - -#define CTABLES ((HTAPS + 7) / 8) /** number of "8 MACs" lookup tables */ - -/* - * Properties of this 96-tap lowpass filter when applied on a signal - * with sampling rate of 44100*64 Hz: - * - * () has a delay of 17 microseconds. - * - * () flat response up to 48 kHz - * - * () if you downsample afterwards by a factor of 8, the - * spectrum below 70 kHz is practically alias-free. - * - * () stopband rejection is about 160 dB - * - * The coefficient tables ("ctables") take only 6 Kibi Bytes and - * should fit into a modern processor's fast cache. - */ - -/** - * The 2nd half (48 coeffs) of a 96-tap symmetric lowpass filter - */ -static const double htaps[HTAPS] = { - 0.09950731974056658, 0.09562845727714668, 0.08819647126516944, - 0.07782552527068175, 0.06534876523171299, 0.05172629311427257, - 0.0379429484910187, 0.02490921351762261, 0.0133774746265897, - 0.003883043418804416, -0.003284703416210726, -0.008080250212687497, - -0.01067241812471033, -0.01139427235000863, -0.0106813877974587, - -0.009007905078766049, -0.006828859761015335, -0.004535184322001496, - -0.002425035959059578, -0.0006922187080790708, 0.0005700762133516592, - 0.001353838005269448, 0.001713709169690937, 0.001742046839472948, - 0.001545601648013235, 0.001226696225277855, 0.0008704322683580222, - 0.0005381636200535649, 0.000266446345425276, 7.002968738383528e-05, - -5.279407053811266e-05, -0.0001140625650874684, -0.0001304796361231895, - -0.0001189970287491285, -9.396247155265073e-05, -6.577634378272832e-05, - -4.07492895872535e-05, -2.17407957554587e-05, -9.163058931391722e-06, - -2.017460145032201e-06, 1.249721855219005e-06, 2.166655190537392e-06, - 1.930520892991082e-06, 1.319400334374195e-06, 7.410039764949091e-07, - 3.423230509967409e-07, 1.244182214744588e-07, 3.130441005359396e-08 -}; - -static float ctables[CTABLES][256]; - -static av_cold void dsd_ctables_tableinit(void) -{ - int t, e, m, sign; - double acc[CTABLES]; - for (e = 0; e < 256; ++e) { - memset(acc, 0, sizeof(acc)); - for (m = 0; m < 8; ++m) { - sign = (((e >> (7 - m)) & 1) * 2 - 1); - for (t = 0; t < CTABLES; ++t) - acc[t] += sign * htaps[t * 8 + m]; - } - for (t = 0; t < CTABLES; ++t) - ctables[CTABLES - 1 - t][e] = acc[t]; - } -} - -av_cold void ff_init_dsd_data(void) -{ - static AVOnce init_static_once = AV_ONCE_INIT; - ff_thread_once(&init_static_once, dsd_ctables_tableinit); -} - -void ff_dsd2pcm_translate(DSDContext* s, size_t samples, int lsbf, - const uint8_t *src, ptrdiff_t src_stride, - float *dst, ptrdiff_t dst_stride) -{ - uint8_t buf[FIFOSIZE]; - unsigned pos, i; - uint8_t* p; - double sum; - - pos = s->pos; - - memcpy(buf, s->buf, sizeof(buf)); - - while (samples-- > 0) { - buf[pos] = lsbf ? 
ff_reverse[*src] : *src; - src += src_stride; - - p = buf + ((pos - CTABLES) & FIFOMASK); - *p = ff_reverse[*p]; - - sum = 0.0; - for (i = 0; i < CTABLES; i++) { - uint8_t a = buf[(pos - i) & FIFOMASK]; - uint8_t b = buf[(pos - (CTABLES*2 - 1) + i) & FIFOMASK]; - sum += ctables[i][a] + ctables[i][b]; - } - - *dst = (float)sum; - dst += dst_stride; - - pos = (pos + 1) & FIFOMASK; - } - - s->pos = pos; - memcpy(s->buf, buf, sizeof(buf)); -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/internal.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/internal.h deleted file mode 100644 index a283c52e01229bfb86909accf9630320da976c0b..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/internal.h +++ /dev/null @@ -1,247 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * common internal api header. - */ - -#ifndef AVCODEC_INTERNAL_H -#define AVCODEC_INTERNAL_H - -#include - -#include "libavutil/buffer.h" -#include "libavutil/channel_layout.h" -#include "libavutil/mathematics.h" -#include "libavutil/pixfmt.h" -#include "avcodec.h" -#include "config.h" - -#if CONFIG_LCMS2 -# include "fflcms2.h" -#endif - -#define FF_SANE_NB_CHANNELS 512U - -#if HAVE_SIMD_ALIGN_64 -# define STRIDE_ALIGN 64 /* AVX-512 */ -#elif HAVE_SIMD_ALIGN_32 -# define STRIDE_ALIGN 32 -#elif HAVE_SIMD_ALIGN_16 -# define STRIDE_ALIGN 16 -#else -# define STRIDE_ALIGN 8 -#endif - -typedef struct AVCodecInternal { - /** - * When using frame-threaded decoding, this field is set for the first - * worker thread (e.g. to decode extradata just once). - */ - int is_copy; - - /** - * An audio frame with less than required samples has been submitted (and - * potentially padded with silence). Reject all subsequent frames. - */ - int last_audio_frame; - - /** - * Audio encoders can set this flag during init to indicate that they - * want the small last frame to be padded to a multiple of pad_samples. - */ - int pad_samples; - - AVBufferRef *pool; - - void *thread_ctx; - - /** - * This packet is used to hold the packet given to decoders - * implementing the .decode API; it is unused by the generic - * code for decoders implementing the .receive_frame API and - * may be freely used (but not freed) by them with the caveat - * that the packet will be unreferenced generically in - * avcodec_flush_buffers(). - */ - AVPacket *in_pkt; - struct AVBSFContext *bsf; - - /** - * Properties (timestamps+side data) extracted from the last packet passed - * for decoding. - */ - AVPacket *last_pkt_props; - - /** - * temporary buffer used for encoders to store their bitstream - */ - uint8_t *byte_buffer; - unsigned int byte_buffer_size; - - /** - * This is set to AV_PKT_FLAG_KEY for encoders that encode intra-only - * formats (i.e. 
whose codec descriptor has AV_CODEC_PROP_INTRA_ONLY set). - * This is used to set said flag generically for said encoders. - */ - int intra_only_flag; - - void *frame_thread_encoder; - - /** - * The input frame is stored here for encoders implementing the simple - * encode API. - * - * Not allocated in other cases. - */ - AVFrame *in_frame; - - /** - * When the AV_CODEC_FLAG_RECON_FRAME flag is used. the encoder should store - * here the reconstructed frame corresponding to the last returned packet. - * - * Not allocated in other cases. - */ - AVFrame *recon_frame; - - /** - * If this is set, then FFCodec->close (if existing) needs to be called - * for the parent AVCodecContext. - */ - int needs_close; - - /** - * Number of audio samples to skip at the start of the next decoded frame - */ - int skip_samples; - - /** - * hwaccel-specific private data - */ - void *hwaccel_priv_data; - - /** - * checks API usage: after codec draining, flush is required to resume operation - */ - int draining; - - /** - * Temporary buffers for newly received or not yet output packets/frames. - */ - AVPacket *buffer_pkt; - AVFrame *buffer_frame; - int draining_done; - - int showed_multi_packet_warning; - - /* to prevent infinite loop on errors when draining */ - int nb_draining_errors; - - /* used when avctx flag AV_CODEC_FLAG_DROPCHANGED is set */ - int changed_frames_dropped; - int initial_format; - int initial_width, initial_height; - int initial_sample_rate; - AVChannelLayout initial_ch_layout; - -#if CONFIG_LCMS2 - FFIccContext icc; /* used to read and write embedded ICC profiles */ -#endif -} AVCodecInternal; - -/** - * Return the index into tab at which {a,b} match elements {[0],[1]} of tab. - * If there is no such matching pair then size is returned. - */ -int ff_match_2uint16(const uint16_t (*tab)[2], int size, int a, int b); - -unsigned int ff_toupper4(unsigned int x); - -void ff_color_frame(AVFrame *frame, const int color[4]); - -/** - * Maximum size in bytes of extradata. - * This value was chosen such that every bit of the buffer is - * addressable by a 32-bit signed integer as used by get_bits. - */ -#define FF_MAX_EXTRADATA_SIZE ((1 << 28) - AV_INPUT_BUFFER_PADDING_SIZE) - -/** - * 2^(x) for integer x - * @return correctly rounded float - */ -static av_always_inline float ff_exp2fi(int x) { - /* Normal range */ - if (-126 <= x && x <= 128) - return av_int2float((x+127) << 23); - /* Too large */ - else if (x > 128) - return INFINITY; - /* Subnormal numbers */ - else if (x > -150) - return av_int2float(1 << (x+149)); - /* Negligibly small */ - else - return 0; -} - -int avpriv_h264_has_num_reorder_frames(AVCodecContext *avctx); - -int avpriv_codec_get_cap_skip_frame_fill_param(const AVCodec *codec); - -/** - * Add a CPB properties side data to an encoding context. - */ -AVCPBProperties *ff_add_cpb_side_data(AVCodecContext *avctx); - -/** - * Check AVFrame for S12M timecode side data and allocate and fill TC SEI message with timecode info - * - * @param frame Raw frame to get S12M timecode side data from - * @param rate The frame rate - * @param prefix_len Number of bytes to allocate before SEI message - * @param data Pointer to a variable to store allocated memory - * Upon return the variable will hold NULL on error or if frame has no S12M timecode info. 
- * Otherwise it will point to prefix_len uninitialized bytes followed by - * *sei_size SEI message - * @param sei_size Pointer to a variable to store generated SEI message length - * @return Zero on success, negative error code on failure - */ -int ff_alloc_timecode_sei(const AVFrame *frame, AVRational rate, size_t prefix_len, - void **data, size_t *sei_size); - -/** - * Get an estimated video bitrate based on frame size, frame rate and coded - * bits per pixel. - */ -int64_t ff_guess_coded_bitrate(AVCodecContext *avctx); - -/** - * Check if a value is in the list. If not, return the default value - * - * @param ctx Context for the log msg - * @param val_name Name of the checked value, for log msg - * @param array_valid_values Array of valid int, ended with INT_MAX - * @param default_value Value return if checked value is not in the array - * @return Value or default_value. - */ -int ff_int_from_list_or_default(void *ctx, const char * val_name, int val, - const int * array_valid_values, int default_value); - -#endif /* AVCODEC_INTERNAL_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxvid.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxvid.c deleted file mode 100644 index aba875b6b859efc6a65e5c1d53fa1001d84ac811..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxvid.c +++ /dev/null @@ -1,914 +0,0 @@ -/* - * Interface to xvidcore for MPEG-4 encoding - * Copyright (c) 2004 Adam Thayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Interface to xvidcore for MPEG-4 compliant encoding. - * @author Adam Thayer (krevnik@comcast.net) - */ - -#include -#include -#include - -#include "libavutil/avassert.h" -#include "libavutil/file_open.h" -#include "libavutil/internal.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/mathematics.h" -#include "libavutil/mem.h" -#include "libavutil/opt.h" - -#include "avcodec.h" -#include "codec_internal.h" -#include "encode.h" -#include "mpegutils.h" -#include "packet_internal.h" - -#if HAVE_UNISTD_H -#include -#endif - -#if HAVE_IO_H -#include -#endif - -/** - * Buffer management macros. - */ -#define BUFFER_SIZE 1024 -#define BUFFER_REMAINING(x) (BUFFER_SIZE - strlen(x)) -#define BUFFER_CAT(x) (&((x)[strlen(x)])) - -/** - * Structure for the private Xvid context. - * This stores all the private context for the codec. - */ -struct xvid_context { - AVClass *class; - void *encoder_handle; /**< Handle for Xvid encoder */ - int xsize; /**< Frame x size */ - int ysize; /**< Frame y size */ - int vop_flags; /**< VOP flags for Xvid encoder */ - int vol_flags; /**< VOL flags for Xvid encoder */ - int me_flags; /**< Motion Estimation flags */ - int qscale; /**< Do we use constant scale? 
*/ - int quicktime_format; /**< Are we in a QT-based format? */ - char *twopassbuffer; /**< Character buffer for two-pass */ - char *old_twopassbuffer; /**< Old character buffer (two-pass) */ - char *twopassfile; /**< second pass temp file name */ - int twopassfd; - unsigned char *intra_matrix; /**< P-Frame Quant Matrix */ - unsigned char *inter_matrix; /**< I-Frame Quant Matrix */ - int lumi_aq; /**< Lumi masking as an aq method */ - int variance_aq; /**< Variance adaptive quantization */ - int ssim; /**< SSIM information display mode */ - int ssim_acc; /**< SSIM accuracy. 0: accurate. 4: fast. */ - int gmc; - int me_quality; /**< Motion estimation quality. 0: fast 6: best. */ - int mpeg_quant; /**< Quantization type. 0: H.263, 1: MPEG */ -}; - -/** - * Structure for the private first-pass plugin. - */ -struct xvid_ff_pass1 { - int version; /**< Xvid version */ - struct xvid_context *context; /**< Pointer to private context */ -}; - -static int xvid_encode_close(AVCodecContext *avctx); -static int xvid_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *picture, int *got_packet); - - -/* - * Xvid 2-Pass Kludge Section - * - * Xvid's default 2-pass doesn't allow us to create data as we need to, so - * this section spends time replacing the first pass plugin so we can write - * statistic information as libavcodec requests in. We have another kludge - * that allows us to pass data to the second pass in Xvid without a custom - * rate-control plugin. - */ - -/** - * Initialize the two-pass plugin and context. - * - * @param param Input construction parameter structure - * @param handle Private context handle - * @return Returns XVID_ERR_xxxx on failure, or 0 on success. - */ -static int xvid_ff_2pass_create(xvid_plg_create_t *param, void **handle) -{ - struct xvid_ff_pass1 *x = (struct xvid_ff_pass1 *) param->param; - char *log = x->context->twopassbuffer; - - /* Do a quick bounds check */ - if (!log) - return XVID_ERR_FAIL; - - /* We use snprintf() */ - /* This is because we can safely prevent a buffer overflow */ - log[0] = 0; - snprintf(log, BUFFER_REMAINING(log), - "# ffmpeg 2-pass log file, using xvid codec\n"); - snprintf(BUFFER_CAT(log), BUFFER_REMAINING(log), - "# Do not modify. libxvidcore version: %d.%d.%d\n\n", - XVID_VERSION_MAJOR(XVID_VERSION), - XVID_VERSION_MINOR(XVID_VERSION), - XVID_VERSION_PATCH(XVID_VERSION)); - - *handle = x->context; - return 0; -} - -/** - * Destroy the two-pass plugin context. - * - * @param ref Context pointer for the plugin - * @param param Destroy context - * @return Returns 0, success guaranteed - */ -static int xvid_ff_2pass_destroy(struct xvid_context *ref, - xvid_plg_destroy_t *param) -{ - /* Currently cannot think of anything to do on destruction */ - /* Still, the framework should be here for reference/use */ - if (ref->twopassbuffer) - ref->twopassbuffer[0] = 0; - return 0; -} - -/** - * Enable fast encode mode during the first pass. 
- * - * @param ref Context pointer for the plugin - * @param param Frame data - * @return Returns 0, success guaranteed - */ -static int xvid_ff_2pass_before(struct xvid_context *ref, - xvid_plg_data_t *param) -{ - int motion_remove; - int motion_replacements; - int vop_remove; - - /* Nothing to do here, result is changed too much */ - if (param->zone && param->zone->mode == XVID_ZONE_QUANT) - return 0; - - /* We can implement a 'turbo' first pass mode here */ - param->quant = 2; - - /* Init values */ - motion_remove = ~XVID_ME_CHROMA_PVOP & - ~XVID_ME_CHROMA_BVOP & - ~XVID_ME_EXTSEARCH16 & - ~XVID_ME_ADVANCEDDIAMOND16; - motion_replacements = XVID_ME_FAST_MODEINTERPOLATE | - XVID_ME_SKIP_DELTASEARCH | - XVID_ME_FASTREFINE16 | - XVID_ME_BFRAME_EARLYSTOP; - vop_remove = ~XVID_VOP_MODEDECISION_RD & - ~XVID_VOP_FAST_MODEDECISION_RD & - ~XVID_VOP_TRELLISQUANT & - ~XVID_VOP_INTER4V & - ~XVID_VOP_HQACPRED; - - param->vol_flags &= ~XVID_VOL_GMC; - param->vop_flags &= vop_remove; - param->motion_flags &= motion_remove; - param->motion_flags |= motion_replacements; - - return 0; -} - -/** - * Capture statistic data and write it during first pass. - * - * @param ref Context pointer for the plugin - * @param param Statistic data - * @return Returns XVID_ERR_xxxx on failure, or 0 on success - */ -static int xvid_ff_2pass_after(struct xvid_context *ref, - xvid_plg_data_t *param) -{ - char *log = ref->twopassbuffer; - const char *frame_types = " ipbs"; - char frame_type; - - /* Quick bounds check */ - if (!log) - return XVID_ERR_FAIL; - - /* Convert the type given to us into a character */ - if (param->type < 5 && param->type > 0) - frame_type = frame_types[param->type]; - else - return XVID_ERR_FAIL; - - snprintf(BUFFER_CAT(log), BUFFER_REMAINING(log), - "%c %d %d %d %d %d %d\n", - frame_type, param->stats.quant, param->stats.kblks, - param->stats.mblks, param->stats.ublks, - param->stats.length, param->stats.hlength); - - return 0; -} - -/** - * Dispatch function for our custom plugin. - * This handles the dispatch for the Xvid plugin. It passes data - * on to other functions for actual processing. - * - * @param ref Context pointer for the plugin - * @param cmd The task given for us to complete - * @param p1 First parameter (varies) - * @param p2 Second parameter (varies) - * @return Returns XVID_ERR_xxxx on failure, or 0 on success - */ -static int xvid_ff_2pass(void *ref, int cmd, void *p1, void *p2) -{ - switch (cmd) { - case XVID_PLG_INFO: - case XVID_PLG_FRAME: - return 0; - case XVID_PLG_BEFORE: - return xvid_ff_2pass_before(ref, p1); - case XVID_PLG_CREATE: - return xvid_ff_2pass_create(p1, p2); - case XVID_PLG_AFTER: - return xvid_ff_2pass_after(ref, p1); - case XVID_PLG_DESTROY: - return xvid_ff_2pass_destroy(ref, p1); - default: - return XVID_ERR_FAIL; - } -} - -/** - * Routine to create a global VO/VOL header for MP4 container. - * What we do here is extract the header from the Xvid bitstream - * as it is encoded. We also strip the repeated headers from the - * bitstream when a global header is requested for MPEG-4 ISO - * compliance. 
- * - * @param avctx AVCodecContext pointer to context - * @param frame Pointer to encoded frame data - * @param header_len Length of header to search - * @param frame_len Length of encoded frame data - * @return Returns new length of frame data - */ -static int xvid_strip_vol_header(AVCodecContext *avctx, AVPacket *pkt, - unsigned int header_len, - unsigned int frame_len) -{ - int vo_len = 0, i; - - for (i = 0; i < header_len - 3; i++) { - if (pkt->data[i] == 0x00 && - pkt->data[i + 1] == 0x00 && - pkt->data[i + 2] == 0x01 && - pkt->data[i + 3] == 0xB6) { - vo_len = i; - break; - } - } - - if (vo_len > 0) { - /* We need to store the header, so extract it */ - if (!avctx->extradata) { - avctx->extradata = av_malloc(vo_len); - if (!avctx->extradata) - return AVERROR(ENOMEM); - memcpy(avctx->extradata, pkt->data, vo_len); - avctx->extradata_size = vo_len; - } - /* Less dangerous now, memmove properly copies the two - * chunks of overlapping data */ - memmove(pkt->data, &pkt->data[vo_len], frame_len - vo_len); - pkt->size = frame_len - vo_len; - } - return 0; -} - -/** - * Routine to correct a possibly erroneous framerate being fed to us. - * Xvid currently chokes on framerates where the ticks per frame is - * extremely large. This function works to correct problems in this area - * by estimating a new framerate and taking the simpler fraction of - * the two presented. - * - * @param avctx Context that contains the framerate to correct. - */ -static void xvid_correct_framerate(AVCodecContext *avctx) -{ - int frate, fbase; - int est_frate, est_fbase; - int gcd; - float est_fps, fps; - - frate = avctx->time_base.den; - fbase = avctx->time_base.num; - - gcd = av_gcd(frate, fbase); - if (gcd > 1) { - frate /= gcd; - fbase /= gcd; - } - - if (frate <= 65000 && fbase <= 65000) { - avctx->time_base.den = frate; - avctx->time_base.num = fbase; - return; - } - - fps = (float) frate / (float) fbase; - est_fps = roundf(fps * 1000.0) / 1000.0; - - est_frate = (int) est_fps; - if (est_fps > (int) est_fps) { - est_frate = (est_frate + 1) * 1000; - est_fbase = (int) roundf((float) est_frate / est_fps); - } else - est_fbase = 1; - - gcd = av_gcd(est_frate, est_fbase); - if (gcd > 1) { - est_frate /= gcd; - est_fbase /= gcd; - } - - if (fbase > est_fbase) { - avctx->time_base.den = est_frate; - avctx->time_base.num = est_fbase; - av_log(avctx, AV_LOG_DEBUG, - "Xvid: framerate re-estimated: %.2f, %.3f%% correction\n", - est_fps, (((est_fps - fps) / fps) * 100.0)); - } else { - avctx->time_base.den = frate; - avctx->time_base.num = fbase; - } -} - -static av_cold int xvid_encode_init(AVCodecContext *avctx) -{ - int xerr, i, ret = -1; - int xvid_flags = avctx->flags; - struct xvid_context *x = avctx->priv_data; - uint16_t *intra, *inter; - int fd; - - xvid_plugin_single_t single = { 0 }; - struct xvid_ff_pass1 rc2pass1 = { 0 }; - xvid_plugin_2pass2_t rc2pass2 = { 0 }; - xvid_plugin_lumimasking_t masking_l = { 0 }; /* For lumi masking */ - xvid_plugin_lumimasking_t masking_v = { 0 }; /* For variance AQ */ - xvid_plugin_ssim_t ssim = { 0 }; - xvid_gbl_init_t xvid_gbl_init = { 0 }; - xvid_enc_create_t xvid_enc_create = { 0 }; - xvid_enc_plugin_t plugins[4]; - - x->twopassfd = -1; - - /* Bring in VOP flags from ffmpeg command-line */ - x->vop_flags = XVID_VOP_HALFPEL; /* Bare minimum quality */ - if (xvid_flags & AV_CODEC_FLAG_4MV) - x->vop_flags |= XVID_VOP_INTER4V; /* Level 3 */ - if (avctx->trellis) - x->vop_flags |= XVID_VOP_TRELLISQUANT; /* Level 5 */ - if (xvid_flags & AV_CODEC_FLAG_AC_PRED) - x->vop_flags |= 
XVID_VOP_HQACPRED; /* Level 6 */ - if (xvid_flags & AV_CODEC_FLAG_GRAY) - x->vop_flags |= XVID_VOP_GREYSCALE; - - /* Decide which ME quality setting to use */ - x->me_flags = 0; - switch (x->me_quality) { - case 6: - case 5: - x->me_flags |= XVID_ME_EXTSEARCH16 | - XVID_ME_EXTSEARCH8; - case 4: - case 3: - x->me_flags |= XVID_ME_ADVANCEDDIAMOND8 | - XVID_ME_HALFPELREFINE8 | - XVID_ME_CHROMA_PVOP | - XVID_ME_CHROMA_BVOP; - case 2: - case 1: - x->me_flags |= XVID_ME_ADVANCEDDIAMOND16 | - XVID_ME_HALFPELREFINE16; - } - - /* Decide how we should decide blocks */ - switch (avctx->mb_decision) { - case 2: - x->vop_flags |= XVID_VOP_MODEDECISION_RD; - x->me_flags |= XVID_ME_HALFPELREFINE8_RD | - XVID_ME_QUARTERPELREFINE8_RD | - XVID_ME_EXTSEARCH_RD | - XVID_ME_CHECKPREDICTION_RD; - case 1: - if (!(x->vop_flags & XVID_VOP_MODEDECISION_RD)) - x->vop_flags |= XVID_VOP_FAST_MODEDECISION_RD; - x->me_flags |= XVID_ME_HALFPELREFINE16_RD | - XVID_ME_QUARTERPELREFINE16_RD; - default: - break; - } - - /* Bring in VOL flags from ffmpeg command-line */ - x->vol_flags = 0; - if (x->gmc) { - x->vol_flags |= XVID_VOL_GMC; - x->me_flags |= XVID_ME_GME_REFINE; - } - if (xvid_flags & AV_CODEC_FLAG_QPEL) { - x->vol_flags |= XVID_VOL_QUARTERPEL; - x->me_flags |= XVID_ME_QUARTERPELREFINE16; - if (x->vop_flags & XVID_VOP_INTER4V) - x->me_flags |= XVID_ME_QUARTERPELREFINE8; - } - - xvid_gbl_init.version = XVID_VERSION; - xvid_gbl_init.debug = 0; - xvid_gbl_init.cpu_flags = 0; - - /* Initialize */ - xvid_global(NULL, XVID_GBL_INIT, &xvid_gbl_init, NULL); - - /* Create the encoder reference */ - xvid_enc_create.version = XVID_VERSION; - - /* Store the desired frame size */ - xvid_enc_create.width = - x->xsize = avctx->width; - xvid_enc_create.height = - x->ysize = avctx->height; - - /* Xvid can determine the proper profile to use */ - /* xvid_enc_create.profile = XVID_PROFILE_S_L3; */ - - /* We don't use zones */ - xvid_enc_create.zones = NULL; - xvid_enc_create.num_zones = 0; - - xvid_enc_create.num_threads = avctx->thread_count; -#if (XVID_VERSION <= 0x010303) && (XVID_VERSION >= 0x010300) - /* workaround for a bug in libxvidcore */ - if (avctx->height <= 16) { - if (avctx->thread_count < 2) { - xvid_enc_create.num_threads = 0; - } else { - av_log(avctx, AV_LOG_ERROR, - "Too small height for threads > 1."); - return AVERROR(EINVAL); - } - } -#endif - - xvid_enc_create.plugins = plugins; - xvid_enc_create.num_plugins = 0; - - /* Initialize Buffers */ - x->twopassbuffer = NULL; - x->old_twopassbuffer = NULL; - x->twopassfile = NULL; - - if (xvid_flags & AV_CODEC_FLAG_PASS1) { - rc2pass1.version = XVID_VERSION; - rc2pass1.context = x; - x->twopassbuffer = av_malloc(BUFFER_SIZE); - x->old_twopassbuffer = av_malloc(BUFFER_SIZE); - if (!x->twopassbuffer || !x->old_twopassbuffer) { - av_log(avctx, AV_LOG_ERROR, - "Xvid: Cannot allocate 2-pass log buffers\n"); - return AVERROR(ENOMEM); - } - x->twopassbuffer[0] = - x->old_twopassbuffer[0] = 0; - - plugins[xvid_enc_create.num_plugins].func = xvid_ff_2pass; - plugins[xvid_enc_create.num_plugins].param = &rc2pass1; - xvid_enc_create.num_plugins++; - } else if (xvid_flags & AV_CODEC_FLAG_PASS2) { - rc2pass2.version = XVID_VERSION; - rc2pass2.bitrate = avctx->bit_rate; - - fd = avpriv_tempfile("xvidff.", &x->twopassfile, 0, avctx); - if (fd < 0) { - av_log(avctx, AV_LOG_ERROR, "Xvid: Cannot write 2-pass pipe\n"); - return fd; - } - x->twopassfd = fd; - - if (!avctx->stats_in) { - av_log(avctx, AV_LOG_ERROR, - "Xvid: No 2-pass information loaded for second pass\n"); - return 
AVERROR(EINVAL); - } - - ret = write(fd, avctx->stats_in, strlen(avctx->stats_in)); - if (ret == -1) - ret = AVERROR(errno); - else if (strlen(avctx->stats_in) > ret) { - av_log(avctx, AV_LOG_ERROR, "Xvid: Cannot write to 2-pass pipe\n"); - ret = AVERROR(EIO); - } - if (ret < 0) - return ret; - - rc2pass2.filename = x->twopassfile; - plugins[xvid_enc_create.num_plugins].func = xvid_plugin_2pass2; - plugins[xvid_enc_create.num_plugins].param = &rc2pass2; - xvid_enc_create.num_plugins++; - } else if (!(xvid_flags & AV_CODEC_FLAG_QSCALE)) { - /* Single Pass Bitrate Control! */ - single.version = XVID_VERSION; - single.bitrate = avctx->bit_rate; - - plugins[xvid_enc_create.num_plugins].func = xvid_plugin_single; - plugins[xvid_enc_create.num_plugins].param = &single; - xvid_enc_create.num_plugins++; - } - - if (avctx->lumi_masking != 0.0) - x->lumi_aq = 1; - - /* Luminance Masking */ - if (x->lumi_aq) { - masking_l.method = 0; - plugins[xvid_enc_create.num_plugins].func = xvid_plugin_lumimasking; - - /* The old behavior is that when avctx->lumi_masking is specified, - * plugins[...].param = NULL. Trying to keep the old behavior here. */ - plugins[xvid_enc_create.num_plugins].param = - avctx->lumi_masking ? NULL : &masking_l; - xvid_enc_create.num_plugins++; - } - - /* Variance AQ */ - if (x->variance_aq) { - masking_v.method = 1; - plugins[xvid_enc_create.num_plugins].func = xvid_plugin_lumimasking; - plugins[xvid_enc_create.num_plugins].param = &masking_v; - xvid_enc_create.num_plugins++; - } - - if (x->lumi_aq && x->variance_aq ) - av_log(avctx, AV_LOG_INFO, - "Both lumi_aq and variance_aq are enabled. The resulting quality" - "will be the worse one of the two effects made by the AQ.\n"); - - /* SSIM */ - if (x->ssim) { - plugins[xvid_enc_create.num_plugins].func = xvid_plugin_ssim; - ssim.b_printstat = x->ssim == 2; - ssim.acc = x->ssim_acc; - ssim.cpu_flags = xvid_gbl_init.cpu_flags; - ssim.b_visualize = 0; - plugins[xvid_enc_create.num_plugins].param = &ssim; - xvid_enc_create.num_plugins++; - } - - /* Frame Rate and Key Frames */ - xvid_correct_framerate(avctx); - xvid_enc_create.fincr = avctx->time_base.num; - xvid_enc_create.fbase = avctx->time_base.den; - if (avctx->gop_size > 0) - xvid_enc_create.max_key_interval = avctx->gop_size; - else - xvid_enc_create.max_key_interval = 240; /* Xvid's best default */ - - /* Quants */ - if (xvid_flags & AV_CODEC_FLAG_QSCALE) - x->qscale = 1; - else - x->qscale = 0; - - xvid_enc_create.min_quant[0] = avctx->qmin; - xvid_enc_create.min_quant[1] = avctx->qmin; - xvid_enc_create.min_quant[2] = avctx->qmin; - xvid_enc_create.max_quant[0] = avctx->qmax; - xvid_enc_create.max_quant[1] = avctx->qmax; - xvid_enc_create.max_quant[2] = avctx->qmax; - - /* Quant Matrices */ - x->intra_matrix = - x->inter_matrix = NULL; - - if (x->mpeg_quant) - x->vol_flags |= XVID_VOL_MPEGQUANT; - if ((avctx->intra_matrix || avctx->inter_matrix)) { - x->vol_flags |= XVID_VOL_MPEGQUANT; - - if (avctx->intra_matrix) { - intra = avctx->intra_matrix; - x->intra_matrix = av_malloc(sizeof(unsigned char) * 64); - if (!x->intra_matrix) - return AVERROR(ENOMEM); - } else - intra = NULL; - if (avctx->inter_matrix) { - inter = avctx->inter_matrix; - x->inter_matrix = av_malloc(sizeof(unsigned char) * 64); - if (!x->inter_matrix) - return AVERROR(ENOMEM); - } else - inter = NULL; - - for (i = 0; i < 64; i++) { - if (intra) - x->intra_matrix[i] = (unsigned char) intra[i]; - if (inter) - x->inter_matrix[i] = (unsigned char) inter[i]; - } - } - - /* Misc Settings */ - 
xvid_enc_create.frame_drop_ratio = 0; - xvid_enc_create.global = 0; - if (xvid_flags & AV_CODEC_FLAG_CLOSED_GOP) - xvid_enc_create.global |= XVID_GLOBAL_CLOSED_GOP; - - /* Determines which codec mode we are operating in */ - avctx->extradata = NULL; - avctx->extradata_size = 0; - if (xvid_flags & AV_CODEC_FLAG_GLOBAL_HEADER) { - /* In this case, we are claiming to be MPEG-4 */ - x->quicktime_format = 1; - } else { - /* We are claiming to be Xvid */ - x->quicktime_format = 0; - if (!avctx->codec_tag) - avctx->codec_tag = AV_RL32("xvid"); - } - - /* Bframes */ - xvid_enc_create.max_bframes = avctx->max_b_frames; - xvid_enc_create.bquant_offset = 100 * avctx->b_quant_offset; - xvid_enc_create.bquant_ratio = 100 * avctx->b_quant_factor; - if (avctx->max_b_frames > 0 && !x->quicktime_format) - xvid_enc_create.global |= XVID_GLOBAL_PACKED; - - av_assert0(xvid_enc_create.num_plugins + (!!x->ssim) + (!!x->variance_aq) + (!!x->lumi_aq) <= FF_ARRAY_ELEMS(plugins)); - - /* Encode a dummy frame to get the extradata immediately */ - if (x->quicktime_format) { - AVFrame *picture; - AVPacket *packet; - int size, got_packet; - - packet = av_packet_alloc(); - if (!packet) - return AVERROR(ENOMEM); - - picture = av_frame_alloc(); - if (!picture) { - av_packet_free(&packet); - return AVERROR(ENOMEM); - } - - xerr = xvid_encore(NULL, XVID_ENC_CREATE, &xvid_enc_create, NULL); - if( xerr ) { - av_packet_free(&packet); - av_frame_free(&picture); - av_log(avctx, AV_LOG_ERROR, "Xvid: Could not create encoder reference\n"); - return AVERROR_EXTERNAL; - } - x->encoder_handle = xvid_enc_create.handle; - size = ((avctx->width + 1) & ~1) * ((avctx->height + 1) & ~1); - picture->data[0] = av_malloc(size + size / 2); - if (!picture->data[0]) { - av_packet_free(&packet); - av_frame_free(&picture); - return AVERROR(ENOMEM); - } - picture->data[1] = picture->data[0] + size; - picture->data[2] = picture->data[1] + size / 4; - memset(picture->data[0], 0, size); - memset(picture->data[1], 128, size / 2); - xvid_encode_frame(avctx, packet, picture, &got_packet); - av_packet_free(&packet); - av_free(picture->data[0]); - av_frame_free(&picture); - xvid_encore(x->encoder_handle, XVID_ENC_DESTROY, NULL, NULL); - } - - /* Create encoder context */ - xerr = xvid_encore(NULL, XVID_ENC_CREATE, &xvid_enc_create, NULL); - if (xerr) { - av_log(avctx, AV_LOG_ERROR, "Xvid: Could not create encoder reference\n"); - return AVERROR_EXTERNAL; - } - - x->encoder_handle = xvid_enc_create.handle; - - return 0; -} - -static int xvid_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *picture, int *got_packet) -{ - int xerr, i, ret; - struct xvid_context *x = avctx->priv_data; - int mb_width = (avctx->width + 15) / 16; - int mb_height = (avctx->height + 15) / 16; - char *tmp; - - xvid_enc_frame_t xvid_enc_frame = { 0 }; - xvid_enc_stats_t xvid_enc_stats = { 0 }; - - if ((ret = ff_alloc_packet(avctx, pkt, mb_width*(int64_t)mb_height*MAX_MB_BYTES + AV_INPUT_BUFFER_MIN_SIZE)) < 0) - return ret; - - /* Start setting up the frame */ - xvid_enc_frame.version = XVID_VERSION; - xvid_enc_stats.version = XVID_VERSION; - - /* Let Xvid know where to put the frame. 
*/ - xvid_enc_frame.bitstream = pkt->data; - xvid_enc_frame.length = pkt->size; - - /* Initialize input image fields */ - if (avctx->pix_fmt != AV_PIX_FMT_YUV420P) { - av_log(avctx, AV_LOG_ERROR, - "Xvid: Color spaces other than 420P not supported\n"); - return AVERROR(EINVAL); - } - - xvid_enc_frame.input.csp = XVID_CSP_PLANAR; /* YUV420P */ - - for (i = 0; i < 4; i++) { - xvid_enc_frame.input.plane[i] = picture->data[i]; - xvid_enc_frame.input.stride[i] = picture->linesize[i]; - } - - /* Encoder Flags */ - xvid_enc_frame.vop_flags = x->vop_flags; - xvid_enc_frame.vol_flags = x->vol_flags; - xvid_enc_frame.motion = x->me_flags; - xvid_enc_frame.type = - picture->pict_type == AV_PICTURE_TYPE_I ? XVID_TYPE_IVOP : - picture->pict_type == AV_PICTURE_TYPE_P ? XVID_TYPE_PVOP : - picture->pict_type == AV_PICTURE_TYPE_B ? XVID_TYPE_BVOP : - XVID_TYPE_AUTO; - - /* Pixel aspect ratio setting */ - if (avctx->sample_aspect_ratio.num < 0 || avctx->sample_aspect_ratio.num > 255 || - avctx->sample_aspect_ratio.den < 0 || avctx->sample_aspect_ratio.den > 255) { - av_log(avctx, AV_LOG_WARNING, - "Invalid pixel aspect ratio %i/%i, limit is 255/255 reducing\n", - avctx->sample_aspect_ratio.num, avctx->sample_aspect_ratio.den); - av_reduce(&avctx->sample_aspect_ratio.num, &avctx->sample_aspect_ratio.den, - avctx->sample_aspect_ratio.num, avctx->sample_aspect_ratio.den, 255); - } - xvid_enc_frame.par = XVID_PAR_EXT; - xvid_enc_frame.par_width = avctx->sample_aspect_ratio.num; - xvid_enc_frame.par_height = avctx->sample_aspect_ratio.den; - - /* Quant Setting */ - if (x->qscale) - xvid_enc_frame.quant = picture->quality / FF_QP2LAMBDA; - else - xvid_enc_frame.quant = 0; - - /* Matrices */ - xvid_enc_frame.quant_intra_matrix = x->intra_matrix; - xvid_enc_frame.quant_inter_matrix = x->inter_matrix; - - /* Encode */ - xerr = xvid_encore(x->encoder_handle, XVID_ENC_ENCODE, - &xvid_enc_frame, &xvid_enc_stats); - - /* Two-pass log buffer swapping */ - avctx->stats_out = NULL; - if (x->twopassbuffer) { - tmp = x->old_twopassbuffer; - x->old_twopassbuffer = x->twopassbuffer; - x->twopassbuffer = tmp; - x->twopassbuffer[0] = 0; - if (x->old_twopassbuffer[0] != 0) { - avctx->stats_out = x->old_twopassbuffer; - } - } - - if (xerr > 0) { - int pict_type; - - *got_packet = 1; - - if (xvid_enc_stats.type == XVID_TYPE_PVOP) - pict_type = AV_PICTURE_TYPE_P; - else if (xvid_enc_stats.type == XVID_TYPE_BVOP) - pict_type = AV_PICTURE_TYPE_B; - else if (xvid_enc_stats.type == XVID_TYPE_SVOP) - pict_type = AV_PICTURE_TYPE_S; - else - pict_type = AV_PICTURE_TYPE_I; - - ff_side_data_set_encoder_stats(pkt, xvid_enc_stats.quant * FF_QP2LAMBDA, NULL, 0, pict_type); - - if (xvid_enc_frame.out_flags & XVID_KEYFRAME) { - pkt->flags |= AV_PKT_FLAG_KEY; - if (x->quicktime_format) - return xvid_strip_vol_header(avctx, pkt, - xvid_enc_stats.hlength, xerr); - } - - pkt->size = xerr; - - return 0; - } else { - if (!xerr) - return 0; - av_log(avctx, AV_LOG_ERROR, - "Xvid: Encoding Error Occurred: %i\n", xerr); - return AVERROR_EXTERNAL; - } -} - -static av_cold int xvid_encode_close(AVCodecContext *avctx) -{ - struct xvid_context *x = avctx->priv_data; - - if (x->encoder_handle) { - xvid_encore(x->encoder_handle, XVID_ENC_DESTROY, NULL, NULL); - x->encoder_handle = NULL; - } - - if (x->twopassbuffer) { - av_freep(&x->twopassbuffer); - av_freep(&x->old_twopassbuffer); - avctx->stats_out = NULL; - } - if (x->twopassfd>=0) { - unlink(x->twopassfile); - close(x->twopassfd); - x->twopassfd = -1; - } - av_freep(&x->twopassfile); - 
av_freep(&x->intra_matrix); - av_freep(&x->inter_matrix); - - return 0; -} - -#define OFFSET(x) offsetof(struct xvid_context, x) -#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM -static const AVOption options[] = { - { "lumi_aq", "Luminance masking AQ", OFFSET(lumi_aq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE }, - { "variance_aq", "Variance AQ", OFFSET(variance_aq), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE }, - { "ssim", "Show SSIM information to stdout", OFFSET(ssim), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 2, VE, "ssim" }, - { "off", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 0 }, INT_MIN, INT_MAX, VE, "ssim" }, - { "avg", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 1 }, INT_MIN, INT_MAX, VE, "ssim" }, - { "frame", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = 2 }, INT_MIN, INT_MAX, VE, "ssim" }, - { "ssim_acc", "SSIM accuracy", OFFSET(ssim_acc), AV_OPT_TYPE_INT, { .i64 = 2 }, 0, 4, VE }, - { "gmc", "use GMC", OFFSET(gmc), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE }, - { "me_quality", "Motion estimation quality", OFFSET(me_quality), AV_OPT_TYPE_INT, { .i64 = 4 }, 0, 6, VE }, - { "mpeg_quant", "Use MPEG quantizers instead of H.263", OFFSET(mpeg_quant), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE }, - { NULL }, -}; - -static const AVClass xvid_class = { - .class_name = "libxvid", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_libxvid_encoder = { - .p.name = "libxvid", - CODEC_LONG_NAME("libxvidcore MPEG-4 part 2"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_MPEG4, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .priv_data_size = sizeof(struct xvid_context), - .init = xvid_encode_init, - FF_CODEC_ENCODE_CB(xvid_encode_frame), - .close = xvid_encode_close, - .p.pix_fmts = (const enum AVPixelFormat[]) { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE }, - .p.priv_class = &xvid_class, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, - .p.wrapper_name = "libxvid", -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Animal Revolt Battle Simulator 1.1.8 Mod APK Create Your Own Custom Animals and Scenarios.md b/spaces/congsaPfin/Manga-OCR/logs/Animal Revolt Battle Simulator 1.1.8 Mod APK Create Your Own Custom Animals and Scenarios.md deleted file mode 100644 index cfda0cd6d356b703491c5f29ac2b9576c0cd32f7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Animal Revolt Battle Simulator 1.1.8 Mod APK Create Your Own Custom Animals and Scenarios.md +++ /dev/null @@ -1,127 +0,0 @@ -
    -

    Animal Revolt Battle Simulator 1.1.8 Mod Apk: A Fun and Crazy Game for Animal Lovers

    -

    If you are a fan of animals, battles, and simulations, then you will love Animal Revolt Battle Simulator, a game that lets you create and watch epic animal fights in a realistic physics-based environment. In this game, you can choose from hundreds of different animals, from lions and tigers to dinosaurs and dragons, and pit them against each other in various scenarios and maps. You can also customize your animals with weapons, armor, and accessories, and watch them fight in slow motion or fast forward. Whether you want to see a gorilla vs a crocodile, a shark vs a bear, or a unicorn vs a dragon, you can make it happen in Animal Revolt Battle Simulator.

    -

    But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money, summon any animal you want, and access all the features and content of the game? Well, that's where Animal Revolt Battle Simulator 1.1.8 Mod Apk comes in handy. This is a modified version of the game that gives you all the benefits and advantages that you need to have more fun and excitement in your animal battles. In this article, we will tell you everything you need to know about Animal Revolt Battle Simulator 1.1.8 Mod Apk, including what it is, why you should download it, how to download and install it, how to play it, and some tips and tricks for playing it better.

    -

    animal revolt battle simulator 1.1 8 mod apk


    Download File: https://urlca.com/2uOcqh



    -

    What is Animal Revolt Battle Simulator?

    -

    Animal Revolt Battle Simulator is a game developed by Beast Battle Games, a studio that specializes in creating animal simulation games. The game was released in June 2020 for Windows PC, Android, and iOS devices. The game has received positive reviews from players and critics alike, who praised its realistic graphics, physics-based gameplay, variety of animals, customization options, and sandbox mode.

    -

    The game is inspired by other popular battle simulator games like Totally Accurate Battle Simulator (TABS) and Ultimate Epic Battle Simulator (UEBS), but with a twist: instead of using humans or fantasy creatures as combatants, it uses animals. The game allows you to create your own animal army by selecting from over 200 different animals, each with their own stats, abilities, behaviors, and sounds. You can also equip your animals with various weapons, armor, and accessories to make them stronger or more unique. You can then place your animals on different maps and scenarios, such as forests, deserts, islands, castles, arenas, etc., and watch them fight against each other or against other players online.

    -

    The game also has a sandbox mode where you can experiment with different combinations of animals and settings without any rules or objectives. You can also adjust the speed of the simulation, from slow motion to fast forward, and use different camera angles to view the action from different perspectives. You can also record your battles and share them with other players online.

    -

    Why Download Animal Revolt Battle Simulator 1.1.8 Mod Apk?

    -

    Animal Revolt Battle Simulator is undoubtedly a fun and crazy game that will keep you entertained for hours. However, the game also has some limitations and drawbacks that might affect your gaming experience. For example, the game requires a lot of storage space and memory to run smoothly, which might not be available on some devices. The game also has some in-app purchases and ads that might annoy you or hinder your progress. Moreover, the game might not have all the features and content that you want, such as more animals, maps, or weapons. That's why downloading Animal Revolt Battle Simulator 1.1.8 Mod Apk is a great idea if you want to enjoy the game to the fullest. This is a modified version of the game that gives you several benefits and advantages that you won't find in the original version. Some of these benefits and advantages are:

    • Unlimited money: You will have unlimited money to buy any animal, weapon, armor, or accessory that you want. You won't have to worry about running out of money or saving up for something expensive. You can also unlock all the premium features and content of the game without spending a dime.
    • All animals unlocked: You will have access to all the animals in the game, from the common ones like dogs and cats to the rare ones like dinosaurs and dragons. You won't have to wait for them to be unlocked or complete any tasks or challenges to get them. You can also summon any animal you want at any time, regardless of the map or scenario.
    • No ads: You will not see any ads in the game that might interrupt your gameplay or distract you from the action. You won't have to watch any videos or click on any banners to get rewards or bonuses. You can enjoy the game without any interruptions or annoyances.
    • No root required: You will not need to root your device to install or run the modded version of the game. You won't have to risk damaging your device or voiding your warranty by rooting it. You can install and play the modded version of the game safely and easily.

    How to Download and Install Animal Revolt Battle Simulator 1.1.8 Mod Apk?

    -

    Downloading and installing Animal Revolt Battle Simulator 1.1.8 Mod Apk is very simple and straightforward. You just need to follow these steps:

    -
      -
    1. Click on this link to download the modded version of the game.
    2. Once the download is complete, go to your device's settings and enable the installation of apps from unknown sources.
    3. Locate the downloaded file in your device's file manager and tap on it to start the installation process (a command-line alternative is sketched below, after the note).
    4. Follow the instructions on the screen and wait for the installation to finish.
    5. Launch the game and enjoy!
    -

    Note: If you already have the original version of the game installed on your device, you need to uninstall it first before installing the modded version. Otherwise, you might encounter some errors or conflicts.
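    If you prefer to install the downloaded file from a computer rather than tapping it on the device (steps 3 and 4 above), the sketch below shows one way to do it. It is only an illustration: it assumes the Android platform tools (adb) are installed and on your PATH, USB debugging is enabled, and the device is connected and authorized, and the APK file name is a placeholder rather than the real download.

```python
# Hedged sketch: sideload an already-downloaded APK from a computer using adb.
# Assumptions (not from the article): adb is installed and on PATH, USB
# debugging is enabled, and the device is connected and authorized.
import subprocess

APK_PATH = "animal-revolt-battle-simulator-1.1.8-mod.apk"  # placeholder file name

def sideload(apk_path: str) -> None:
    # Plain install; uninstall any existing copy first, as the note above says.
    result = subprocess.run(["adb", "install", apk_path],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    sideload(APK_PATH)
```

    This does exactly what steps 3 and 4 do on the device itself, so use whichever route is more convenient.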

    -

    How to Play Animal Revolt Battle Simulator 1.1.8 Mod Apk?

    -

    Playing Animal Revolt Battle Simulator 1.1.8 Mod Apk is very easy and fun. You just need to follow these steps:

    -
      -
    1. Choose a mode: You can choose between campaign mode, sandbox mode, or online mode. Campaign mode lets you play through different levels with different objectives and challenges. Sandbox mode lets you create your own scenarios and battles with no rules or limitations. Online mode lets you play against other players online in real-time.
    2. Choose a map: You can choose from different maps with different terrains, environments, and obstacles. Some maps are more suitable for certain animals than others, so choose wisely.
    3. Choose your animals: You can choose from over 200 different animals, each with their own stats, abilities, behaviors, and sounds. You can also customize your animals with various weapons, armor, and accessories to make them stronger or more unique.
    4. Place your animals: You can place your animals on different spots on the map by dragging and dropping them. You can also rotate them or resize them by pinching them. You can also adjust their formation and alignment by tapping on them.
    5. Start the battle: Once you are satisfied with your animal army, you can start the battle by tapping on the play button. You can also pause, resume, or restart the battle by tapping on the pause button.
    6. Watch the battle: You can watch the battle unfold in a realistic physics-based environment. You can also adjust the speed of the simulation, from slow motion to fast forward, and use different camera angles to view the action from different perspectives.
    -

    Tips and Tricks for Animal Revolt Battle Simulator 1.1.8 Mod Apk

    -

    To help you play Animal Revolt Battle Simulator 1.1.8 Mod Apk better and have more fun, here are some tips and tricks that you can use:

    - - - - - - - - - - - - - - - - - - - - - - - - - -
| Tip | Explanation |
| --- | --- |
| Use the right animals for the right maps | Some animals are more suited for certain maps than others. For example, aquatic animals like sharks and whales are better for maps with water, while flying animals like eagles and dragons are better for maps with high altitudes. Try to match your animals with the maps that suit them best. |
| Balance your animal army | Don't just use one type of animal for your army. Try to balance your animal army with different types of animals, such as melee, ranged, tank, support, etc. This way, you can have a more versatile and effective army that can deal with different situations and enemies. |
| Experiment with different combinations of animals and weapons | Don't be afraid to experiment with different combinations of animals and weapons. You might discover some surprising and hilarious results that you didn't expect. For example, you can try to equip a chicken with a rocket launcher, or a snake with a sword. The possibilities are endless. |
| Use the sandbox mode to test your ideas | If you want to test your ideas or try something new without any consequences, you can use the sandbox mode. This mode lets you create your own scenarios and battles with no rules or limitations. You can also adjust the speed of the simulation, from slow motion to fast forward, and use different camera angles to view the action from different perspectives. |
| Play online mode to challenge other players | If you want to challenge yourself and compete with other players online, you can play online mode. This mode lets you play against other players online in real-time. You can also chat with them and share your battles with them. You can also see the rankings and leaderboards of other players and try to beat their scores. |
    -

    Conclusion

    -

    Animal Revolt Battle Simulator is a fun and crazy game that lets you create and watch epic animal fights in a realistic physics-based environment. You can choose from hundreds of different animals, from lions and tigers to dinosaurs and dragons, and pit them against each other in various scenarios and maps. You can also customize your animals with weapons, armor, and accessories, and watch them fight in slow motion or fast forward.

    -


    -

    If you want to enjoy the game without any limitations or restrictions, you should download Animal Revolt Battle Simulator 1.1.8 Mod Apk. This is a modified version of the game that gives you unlimited money, all animals unlocked, no ads, no root required, and more. You can download and install it easily by following the steps in this article.

    -

    So what are you waiting for? Download Animal Revolt Battle Simulator 1.1.8 Mod Apk now and unleash your inner animal lover!

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Animal Revolt Battle Simulator 1.1.8 Mod Apk:

    -
      -
    1. Is Animal Revolt Battle Simulator 1.1.8 Mod Apk safe to download and install?

      Yes, Animal Revolt Battle Simulator 1.1.8 Mod Apk is safe to download and install. It does not contain any viruses or malware that might harm your device or data. It also does not require root access to run, so you don't have to worry about damaging your device or voiding your warranty by rooting it. If the download site publishes a checksum for the file, you can also verify it yourself (see the sketch after these FAQs).

    2. Is Animal Revolt Battle Simulator 1.1.8 Mod Apk compatible with my device?

      Animal Revolt Battle Simulator 1.1.8 Mod Apk is compatible with most Android devices that run on Android 4.4 or higher. However, some devices might not be able to run the game smoothly due to their low specifications or performance issues. If you encounter any problems while playing the game, you can try lowering the graphics settings or closing other apps running in the background.

    3. How do I update Animal Revolt Battle Simulator 1.1.8 Mod Apk?

      To update Animal Revolt Battle Simulator 1.1.8 Mod Apk, you need to download the latest version of the modded game from this link. Then, uninstall the previous version from your device and install the new version following the same steps as before.

    4. How do I uninstall Animal Revolt Battle Simulator 1.1.8 Mod Apk?

      To uninstall Animal Revolt Battle Simulator 1.1.8 Mod Apk, go to your device's settings and find the app in the list of installed apps. Then, tap on the app and select the uninstall option. You can also uninstall the app by long-pressing on its icon and dragging it to the trash bin.

    5. Where can I find more information about Animal Revolt Battle Simulator 1.1.8 Mod Apk?

      If you want to find more information about Animal Revolt Battle Simulator 1.1.8 Mod Apk, you can visit the official website of the game developer, Beast Battle Games, or their social media pages on Facebook, Twitter, Instagram, and YouTube. You can also visit the official subreddit of the game, r/AnimalRevoltBattleSim, or the official Discord server of the game, Animal Revolt Battle Simulator. You can also read reviews and ratings from other players on Google Play Store or App Store.
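    As mentioned in the first answer, you can also check the downloaded file yourself before installing it. The sketch below compares the file's SHA-256 hash with a published checksum; both the file name and the expected hash are placeholders, and this only helps if the download site actually publishes a checksum.

```python
# Hedged sketch: verify a downloaded APK against a published SHA-256 checksum.
# Both values below are placeholders -- use your real file name and the
# checksum published by the download site, if it provides one.
import hashlib

APK_PATH = "animal-revolt-battle-simulator-1.1.8-mod.apk"  # placeholder
EXPECTED_SHA256 = "paste-the-published-checksum-here"      # placeholder

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large APKs do not need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")
```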

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download MP3 Murottal Anak Perempuan Juz 30 Gratis dan Tanpa Iklan.md b/spaces/congsaPfin/Manga-OCR/logs/Download MP3 Murottal Anak Perempuan Juz 30 Gratis dan Tanpa Iklan.md deleted file mode 100644 index 07913858f2598c4b613a9b35cec937e589819679..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download MP3 Murottal Anak Perempuan Juz 30 Gratis dan Tanpa Iklan.md +++ /dev/null @@ -1,64 +0,0 @@ -
    -

    How to Download MP3 Murottal Anak Perempuan Juz 30

    -

    Murottal is a melodic style of Quran recitation that is known for its beauty and soothing effect. Juz 30 is the last part of the Quran, containing 37 short chapters that are easy to memorize and recite. Many Muslims listen to murottal for spiritual and psychological benefits, such as reducing stress, improving memory, and enhancing mood.

    -

    In this article, we will explain what are the benefits of listening to murottal, and how to download mp3 murottal anak perempuan juz 30 from various sources. Whether you want to listen to it yourself, or share it with your children, friends, or family, you will find this article useful and informative.

    -

    download mp3 murottal anak perempuan juz 30


    Download File: https://urlca.com/2uO8rF



    -

    Benefits of Listening to Murottal

    -

    Listening to murottal has many benefits for the brain, the body, and the soul. Here are some of them:

    -
      -
    • It stimulates the brain. Studies have shown that listening to murottal can activate the alpha waves in the brain, which are associated with relaxation, creativity, and learning. Listening to murottal can also improve memory performance and cognitive function.
    • It reduces stress. Listening to murottal can lower the levels of cortisol, the stress hormone, in the body. It can also calm the nervous system and regulate the heart rate and blood pressure. Listening to murottal can help you cope with anxiety, depression, and insomnia.
    • It enhances mood. Listening to murottal can increase the levels of dopamine, serotonin, and endorphins in the brain, which are neurotransmitters that regulate mood, happiness, and motivation. Listening to murottal can make you feel more positive, optimistic, and grateful.
    • It nourishes the soul. Listening to murottal can connect you with God and His words. It can increase your faith, devotion, and understanding of Islam. It can also inspire you to do good deeds, repent from sins, and seek forgiveness. Listening to murottal can bring you peace, comfort, and guidance.
    -

    How to Download MP3 Murottal Anak Perempuan Juz 30

    -

    If you want to download mp3 murottal anak perempuan juz 30, you can follow these simple steps:

    -
      -
    1. Choose a source. There are many websites and apps that offer mp3 files of murottal from different reciters. Some of them are QuranicAudio.com, Quran Central, Internet Archive, and many others. You can search for them online or on your app store.
    2. Select a reciter. Each reciter has a unique voice and style of recitation. You can choose one that suits your preference and taste. Some of the popular reciters are Mishary Rashid Al-Afasy, Abdul Rahman Al-Sudais, Maher Al-Muaiqly, Saad Al-Ghamdi, and many others. You can also choose female reciters, such as Maria Ulfa, Samia Mubarak, Madinah Javed, and many others.
    3. Download the mp3 file. Once you have selected a source and a reciter, you can download the mp3 file of murottal anak perempuan juz 30 by clicking on the download button or link. You can also listen to it online before downloading it. You can save the file on your device or transfer it to another device. (For a command-line alternative, see the sketch after this list.)
    4. Enjoy the murottal. After downloading the mp3 file, you can enjoy listening to the murottal anytime and anywhere. You can use headphones, speakers, or any other device that can play mp3 files. You can also follow along with the Arabic text or the translation of the Quran. You can listen to it for yourself or share it with others.
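    Step 3 above only needs a web browser, but if you prefer to fetch the file from a terminal or script, here is a minimal sketch. The URL and file name are placeholders (each source listed in step 1 has its own download links), so treat this as an illustration rather than a working link.

```python
# Hedged sketch: download one murottal MP3 file to disk.
# The URL below is a placeholder, not a real link -- copy the direct MP3 link
# from whichever source you chose in step 1.
import urllib.request

MP3_URL = "https://example.com/murottal-juz-30/track-01.mp3"  # placeholder
OUTPUT_FILE = "murottal-juz30-track-01.mp3"

def download(url: str, destination: str) -> None:
    # urlretrieve streams the response body straight into the destination file.
    urllib.request.urlretrieve(url, destination)
    print(f"Saved {destination}")

if __name__ == "__main__":
    download(MP3_URL, OUTPUT_FILE)
```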
    -

    Conclusion

    -

    Murottal is a beautiful and beneficial way of reciting and listening to the Quran. Juz 30 is the last part of the Quran that contains many short and powerful chapters. Listening to murottal anak perempuan juz 30 can bring you many benefits for your brain, your body, and your soul. You can download mp3 murottal anak perempuan juz 30 from various sources and reciters by following some simple steps. We hope this article has helped you learn more about murottal and how to download it. May Allah bless you and accept your efforts.

    -

    FAQs

    -

    Here are some common questions and answers about murottal:

    -
      -
    • What is the difference between murottal and tajweed? Murottal is a style of recitation that focuses on the melody and rhythm of the Quran. Tajweed is a set of rules that governs the pronunciation and articulation of the Arabic letters and words in the Quran. Both are important aspects of reciting the Quran correctly and beautifully.
    • Who are some famous female reciters of the Quran? There are many female reciters of the Quran who have gained recognition and respect for their skills and knowledge. Some of them are Sheikha Munira Abdo, Sakina Hassan, Sheikha Mabrooka, Maria Ulfa, Samia Mubarak, Madinah Javed, Atiiqah Suhaimi, Maryam Amir, Jennifer Grout, Nusaiba Mohammad Timol, Farhatul Fairuzah, Maryam Masud, Sumayah Hassan, and many others.
    • How can I improve my murottal skills? The best way to improve your murottal skills is to practice regularly and learn from qualified teachers and reciters. You can also listen to different styles of murottal and try to imitate them. You should also pay attention to your tajweed rules and your voice quality.
    • What are some benefits of memorizing juz 30? Memorizing juz 30 has many benefits for Muslims. It can help you perform your daily prayers with ease and confidence. It can also help you recite the Quran in different occasions, such as taraweeh prayers, funerals, weddings, etc. It can also protect you from evil and bring you closer to Allah.
    • Where can I find more resources on murottal and juz 30? There are many resources online that can help you learn more about murottal and juz 30. Some of them are QuranicAudio.com, Quran Central, Internet Archive, Recite & Reflect, Qur'anic Ocean, Honest Tea Talk, WAW Creative Arts, School of Tarannum, GuideUS TV, Suhaib Webb's SWISS program, the #FemaleReciters campaign, the Female Reciters Project, and many others.

    -


    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Ultimate Racing Adventure with Unlimited Money and Gems in Ultimate Car Driving Simulator APK.md b/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Ultimate Racing Adventure with Unlimited Money and Gems in Ultimate Car Driving Simulator APK.md deleted file mode 100644 index 1766c0e0ae49a458f920eb46589f550f880709e6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Ultimate Racing Adventure with Unlimited Money and Gems in Ultimate Car Driving Simulator APK.md +++ /dev/null @@ -1,148 +0,0 @@ -
    -

    Ultimate Car Driving Simulator: A Fun and Realistic Driving Game

    -

    If you love driving cars and exploring different places, you might want to check out Ultimate Car Driving Simulator, a 3D driving game that lets you experience the thrill of driving various vehicles in a huge open world map. You can customize your car with different parts, colors, and stickers, and enjoy the realistic physics and sounds of your engine. You can also choose from different game modes and challenges, such as racing, drifting, off-road, traffic, or free roam. You can even play online with other players and show off your driving skills.

    -

    ultimate car driving simulator unlimited money and gems apk download


    DOWNLOAD –––––>>> https://urlca.com/2uO9Oo



    -

    Ultimate Car Driving Simulator is one of the best driving games on Android, with over 100 million downloads and 4.3 stars rating on Google Play. It is free to play, but it also offers in-app purchases for more money and gems, which are the currencies used in the game. Money and gems can help you unlock more cars, upgrade your parts, or buy special items. However, if you don't want to spend real money on the game, you might be interested in getting unlimited money and gems for free. How can you do that? Read on to find out.

    -

    Features of Ultimate Car Driving Simulator

    -

    Before we get into how to get unlimited money and gems in Ultimate Car Driving Simulator, let's take a look at some of the features that make this game so fun and realistic.

    -
      -
    • Huge open world map with different terrains and environments: You can drive around a city, a desert, a mountain, or a beach, and explore every corner of the map. You can also find ramps, bridges, tunnels, or obstacles to perform stunts and tricks.
    • -
    • Customizable cars with realistic physics and sounds: You can choose from over 80 cars, ranging from sports cars, muscle cars, off-road vehicles, trucks, or motorcycles. You can also modify your car's appearance and performance by changing the wheels, suspension, engine, turbo, brakes, or exhaust. You can also add stickers or paint your car with different colors. The game uses advanced physics engine and sound effects to simulate the real behavior and sound of your car.
    • -
    • Various game modes and challenges to test your driving skills: You can play in different modes, such as racing, drifting, off-road, traffic, or free roam. You can also complete various challenges, such as speed camera, checkpoint, or escape. You can earn money and gems by completing these challenges or by driving fast or far.
    • -
    • Online multiplayer mode to compete with other players: You can join online rooms and race or chat with other players from around the world. You can also create your own room and invite your friends to join. You can see the leaderboard and rankings of other players and try to beat their scores.
    • -
    • Graphics and sound quality settings to optimize your experience: You can adjust the graphics and sound quality of the game according to your device's specifications. You can also enable or disable the music, sound effects, or voice chat.
    • -
    -

    As you can see, Ultimate Car Driving Simulator is a game that offers a lot of fun and realism for car enthusiasts. However, if you want to enjoy the game to the fullest, you might need more money and gems than the game provides. That's why some players look for ways to get unlimited money and gems in the game.

    -

    How to Get Unlimited Money and Gems in Ultimate Car Driving Simulator

    -

    Money and gems are the currencies used in Ultimate Car Driving Simulator. You can use them to buy new cars, upgrade your parts, or purchase special items. You can earn money and gems by playing the game, completing challenges, or watching ads. However, these methods might not be enough if you want to unlock all the cars and features in the game. That's why some players look for shortcuts to get unlimited money and gems in the game.

    -

    One of the most common ways to get unlimited money and gems in the game is to use a modded or hacked APK file. An APK file is the file format used by Android devices to install apps. A modded or hacked APK file is a modified version of the original APK file that has been altered to give the player some advantages, such as unlimited money and gems, unlocked cars, or unlimited fuel. However, using a modded or hacked APK file is not recommended for several reasons.

    -


    -
      -
    • The risks of using modded or hacked APK files from unknown sources: Downloading and installing a modded or hacked APK file from an unknown source can be dangerous for your device and your privacy. You might end up downloading a virus, malware, spyware, or ransomware that can harm your device or steal your personal information. You might also get banned from the game or lose your progress if the game detects that you are using a cheat.
    • -
    • The benefits of using a trusted and verified APK file from APKdone.com: If you still want to get unlimited money and gems in Ultimate Car Driving Simulator, there is a safer and easier way to do it. You can use a trusted and verified APK file from APKdone.com, a website that provides free and secure APK files for various Android games and apps. APKdone.com has tested and verified the APK file for Ultimate Car Driving Simulator, and it guarantees that it is free from viruses, malware, spyware, or ransomware. It also ensures that the APK file works with the latest version of the game and that it does not cause any problems with your device or your account.
    • -
    • How to download and install the APK file from APKdone.com: Downloading and installing the APK file from APKdone.com is very easy and fast. You just need to follow these simple steps:
    • -
        -
      1. Go to [APKdone.com] and search for Ultimate Car Driving Simulator.
      2. -
      3. Select the latest version of the game and click on the download button.
      4. -
      5. Wait for the download to finish and locate the APK file on your device.
      6. -
      7. Before installing the APK file, make sure that you have enabled the "Unknown Sources" option on your device's settings. This will allow you to install apps from sources other than Google Play.
      8. -
      9. Tap on the APK file and follow the instructions on the screen to install it.
      10. -
      11. Launch the game and enjoy unlimited money and gems.
      12. -
      -
    • How to enjoy the game with unlimited money and gems: Once you have installed the APK file from APKdone.com, you can enjoy Ultimate Car Driving Simulator with unlimited money and gems. You can buy any car you want, upgrade it to the max, or customize it with different parts, colors, or stickers. You can also buy special items, such as nitro boosters, speed cameras, or police sirens. You can play any mode or challenge without worrying about running out of fuel or crashing your car. You can also compete with other players online without being detected or banned by the game.
    • -
    -

    With unlimited money and gems, you can have more fun and freedom in Ultimate Car Driving Simulator. However, you should also be careful not to abuse this privilege or ruin the game for yourself or others. You should still play fair and respect other players online. You should also remember that cheating is not a substitute for skill or practice. You should still try to improve your driving skills and enjoy the game as it is meant to be played.

    -

    Other Car Driving Simulator Games You Might Like

    -

    If you are looking for some other car driving simulator games to try, you might be interested in some of the following titles that are also available on Google Play or CrazyGames.com. These games offer different features, graphics, and gameplay, but they all share the same passion for driving and realism.

    -

    Here is a brief overview of some of the other popular car driving simulator games:

    -
      -
    • City Car Driving Simulator: This game lets you drive around a large city at night and perform various stunts and missions. You can choose from different cars, such as a police car, a taxi, or a sports car, and customize them with different colors and accessories. You can also use weapons, such as a machine gun or a rocket launcher, to cause chaos and destruction. The game has realistic graphics and physics, and you can adjust the traffic density and weather conditions. You can play this game for free on CrazyGames.com.
    • -
    • City Car Driving: This game is a realistic driving simulator that aims to teach you how to drive safely and efficiently in different road conditions and situations. You can choose from different cars, such as a hatchback, a sedan, or an SUV, and drive them in various locations, such as a city, a highway, or a country road. You can also customize your car's appearance and performance by changing the wheels, engine, transmission, or steering. The game has realistic graphics and sounds, and it uses an advanced traffic AI system that mimics real-life traffic behavior. You can buy this game on Google Play or Steam.
    • -
    • Extreme Car Driving Simulator: This game is a fun and fast-paced driving simulator that lets you drive freely in an open world map with no rules or limits. You can choose from different cars, such as a sports car, a muscle car, or a monster truck, and drive them with realistic physics and damage effects. You can also perform stunts, drifts, jumps, or crashes, and earn coins to unlock more cars or upgrade your parts. The game has simple graphics and controls, and it offers different camera angles and views. You can download this game for free on Google Play.
    • -
    -

    To help you compare these games more easily, here is a table that shows some of their features, ratings, and reviews:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Game | Features | Ratings | Reviews |
| --- | --- | --- | --- |
| Ultimate Car Driving Simulator | Huge open world map with different terrains and environments<br>Customizable cars with realistic physics and sounds<br>Various game modes and challenges<br>Online multiplayer mode<br>Graphics and sound quality settings | 4.3 stars on Google Play<br>4.5 stars on CrazyGames.com | "This is the best car simulator game I have ever played."<br>"The graphics are amazing and the cars are very realistic."<br>"The game is very fun and addictive."<br>"The only problem is that there are too many ads." |
| City Car Driving Simulator | Large city map at night<br>Different cars with customization options<br>Weapons and explosives<br>Realistic graphics and physics<br>Traffic density and weather settings | 4.2 stars on Google Play<br>4.3 stars on CrazyGames.com | "This game is awesome. It has good graphics and controls."<br>"The game is very entertaining and challenging."<br>"The game is cool but it needs more cars and missions."<br>"The game is laggy and buggy." |
| City Car Driving | Realistic driving simulator with educational purpose<br>Different cars with customization options<br>Various locations with different road conditions<br>Realistic graphics and sounds<br>Advanced traffic AI system | 3.9 stars on Google Play<br>4.1 stars on Steam | "This game is very helpful for learning how to drive."<br>"The game is very realistic and immersive."<br>"The game is good but it needs more content and updates."<br>"The game is expensive and hard." |
| Extreme Car Driving Simulator | Open world map with no rules or limits<br>Different cars with realistic physics and damage effects<br>Stunts, drifts, jumps, or crashes<br>Simple graphics and controls<br>Different camera angles and views | | |
    -

    Conclusion

    -

    In conclusion, Ultimate Car Driving Simulator is a fun and realistic driving game that lets you drive various cars in a huge open world map. You can customize your car, choose from different game modes and challenges, or play online with other players. However, if you want to get unlimited money and gems in the game, you should use a trusted and verified APK file from APKdone.com, which is free, safe, and easy to download and install. With unlimited money and gems, you can enjoy the game to the fullest without spending real money or risking your device or account.

    -

    If you are looking for other car driving simulator games to try, you can also check out some of the other popular titles, such as City Car Driving Simulator, City Car Driving, or Extreme Car Driving Simulator. These games offer different features, graphics, and gameplay, but they all share the same passion for driving and realism. You can compare these games using the table above and choose the one that suits your preferences and expectations.

    -

    So what are you waiting for? Download Ultimate Car Driving Simulator from APKdone.com or try other car driving simulator games today and have fun driving!

    -

    Thank you for reading this article and we hope you found it helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Ultimate Car Driving Simulator and other car driving simulator games:

    -
      -
    • Q: How do I update Ultimate Car Driving Simulator?
      A: If you have downloaded the game from Google Play, you can update it automatically or manually from the app store. If you have downloaded the APK file from APKdone.com, you can check the website for the latest version of the game and download it again.
    • -
    • Q: How do I save my progress in Ultimate Car Driving Simulator?
      A: The game automatically saves your progress on your device. However, if you want to backup your data or transfer it to another device, you can use the cloud save feature in the game settings. You will need to connect your game account to Google Play Games or Facebook to use this feature.
    • -
    • Q: How do I play Ultimate Car Driving Simulator on PC?
      A: You can play Ultimate Car Driving Simulator on PC using an Android emulator, such as BlueStacks or NoxPlayer. These are software that allow you to run Android apps on your PC. You will need to download and install the emulator on your PC and then download and install the APK file of the game from APKdone.com.
    • -
    • Q: What are some tips and tricks for playing Ultimate Car Driving Simulator?
      A: Here are some tips and tricks for playing Ultimate Car Driving Simulator:
    • -
        -
      • - Use the nitro boosters to speed up your car and perform stunts.
      • -
      • - Use the brake button to drift or turn sharply.
      • -
      • - Use the camera button to change the view or angle of your car.
      • -
      • - Use the map button to see the whole map and find ramps, bridges, tunnels, or obstacles.
      • -
      • - Use the settings button to adjust the graphics and sound quality of the game.
      • -
      -
    • Q: What are some alternatives to Ultimate Car Driving Simulator?
      A: Some of the alternatives to Ultimate Car Driving Simulator are:
    • -
        -
      • - City Car Driving Simulator: A game that lets you drive around a city at night and use weapons and explosives.
      • -
      • - City Car Driving: A game that teaches you how to drive safely and efficiently in different road conditions and situations.
      • -
      • - Extreme Car Driving Simulator: A game that lets you drive freely in an open world map with no rules or limits.
      • -
      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download Battle.net and Play Blizzard Games Online.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download Battle.net and Play Blizzard Games Online.md deleted file mode 100644 index dd1ac4805ce6b31fe250574ff410bf0fe10e803b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download Battle.net and Play Blizzard Games Online.md +++ /dev/null @@ -1,123 +0,0 @@ - -

    How to Download Battle.net and Enjoy Its Benefits and Games

    -

    If you are a fan of Blizzard or Activision games, you may have heard of Battle.net. But what is it exactly, and how can you download it? In this article, we will explain what Battle.net is, how to download it on different platforms, what are its benefits, and what are some of the best games you can play on it.

    -

    download battle.net


    DOWNLOAD ✏ ✏ ✏ https://urlca.com/2uOeAQ



    -

    What is Battle.net?

    -

    Battle.net is an online gaming platform operated by Blizzard Entertainment. It was launched in 1996 as a way to connect players of Diablo online. Since then, it has evolved into a hub for all Blizzard and Activision games, such as World of Warcraft, Overwatch, Diablo, StarCraft, Hearthstone, Heroes of the Storm, Call of Duty, and more.

    -

    With Battle.net, you can access all your Blizzard and Activision games with one account. You can also chat with your friends, see what game they are playing, and join them in-game. You can also get the latest news and updates about your favorite games, as well as shop for digital games, in-game items, balance, and more.

    -

    How to Download Battle.net?

    -

    Downloading Battle.net is easy and free. Here are the steps you need to follow:

    -
      -
    • Go to the Battle.net website and click on Download.
    • -
    • Select your platform (Windows or Mac) and save the file.
    • -
    • Run the file and follow the instructions to install the Battle.net app.
    • -
    • Launch the app and log in with your Blizzard account. If you don't have one yet, you can create one for free.
    • -
    • Once logged in, you can browse through the available games and download the ones you want to play.
    • -
    -

    If you want to play Battle.net games on your mobile device or console, you will need to download them separately from their respective app stores or websites. However, you can still link your Blizzard account to your mobile or console account to sync your progress and rewards across platforms.

    -

    What are the Benefits of Battle.net?

    -

    Battle.net offers many benefits for gamers who love Blizzard or Activision games. Here are some of them:

    -


    -
      -
    • You can play all your Blizzard and Activision games with one account and one app. You don't need to switch between different launchers or logins to enjoy your games.
    • -
    • You can chat with your friends and see what game they are playing. You can also join them in-game with a click of a button. You can also create groups and voice chat with your friends while playing.
    • -
    • You can get the latest news and updates about your favorite games. You can also watch live streams, videos, and esports events on Battle.net. You can also participate in seasonal events, community challenges, and giveaways.
    • -
    • You can shop for digital games, in-game items, balance, and more on Battle.net. You can also gift games or items to your friends or family. You can also use Blizzard Balance to pay for subscriptions or services such as World of Warcraft or Call of Duty.
    • -
    • You can link your Blizzard account to your mobile or console account to sync your progress and rewards across platforms. You can also use the Battle.net mobile app to chat with your friends, see what game they are playing, and shop for games or items.
    • -
    -

    What are the Best Games on Battle.net?

    -

    Battle.net has a wide range of games for different tastes and preferences. Whether you like action, adventure, strategy, role-playing, card, or casual games, you will find something that suits you on Battle.net. Here are some of the best games you can play on Battle.net:

| Game | Genre | Description |
| --- | --- | --- |
| Overwatch | First-person shooter | A team-based multiplayer game where you can choose from a diverse cast of heroes and fight in various modes and maps. You can also customize your heroes with skins, emotes, voice lines, and more. |
| World of Warcraft | Massively multiplayer online role-playing game | A legendary game where you can create your own character and explore a vast world of fantasy and adventure. You can also join forces with other players to complete quests, dungeons, raids, battlegrounds, and more. |
| Diablo | Action role-playing game | A dark and gritty game where you can slay demons and loot treasures in a randomly generated world. You can also choose from different classes and skills to suit your playstyle. |
| StarCraft | Real-time strategy game | A sci-fi game where you can command one of three races: Terran, Protoss, or Zerg. You can also engage in epic battles against other players or AI opponents in various modes and maps. |
| Hearthstone | Collectible card game | A fun and easy-to-learn game where you can build your own deck of cards and challenge other players or AI opponents in different modes and formats. You can also collect cards from different expansions and adventures. |
    -

    These are just some of the games you can play on Battle.net. There are many more to discover and enjoy, such as Heroes of the Storm, Call of Duty, Destiny 2, and more. You can also look forward to new games and updates coming soon, such as Diablo IV, Overwatch 2, and more.

    -

    Conclusion

    -

    Battle.net is an online gaming platform that lets you play all your Blizzard and Activision games with one account and one app. You can also chat with your friends, get the latest news and updates, shop for games and items, and more. Downloading Battle.net is easy and free. You just need to follow the steps we explained in this article. Once you have Battle.net, you can enjoy some of the best games in the industry, such as Overwatch, World of Warcraft, Diablo, StarCraft, Hearthstone, and more. So what are you waiting for? Download Battle.net today and join the millions of gamers who love it!

    -

    FAQs

    -

    Is Battle.net free to download and use?

    -

    Yes, Battle.net is free to download and use. However, some games may require a subscription or a purchase to play.

    -

    Can I play games from other platforms on Battle.net?

    -

    No, Battle.net only supports games from Blizzard and Activision. You cannot play games from other platforms such as Steam or Epic Games on Battle.net.

    -

    How can I contact Blizzard support if I have any issues with Battle.net?

    -

    You can contact Blizzard support through their website, phone, or chat. You can also visit their forums or social media pages for help from other players or community managers.

    -

    How can I update my Battle.net app and games?

    -

    The Battle.net app and games will automatically update when you launch them. You can also check for updates manually by clicking on the Options menu in the app or game launcher.

    -

    How can I uninstall the Battle.net app and its games?

    -

    You can uninstall the Battle.net app and its games by following the instructions on this page.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install SS Black Edition APK for Ad-Free Streaming on Android and Fire TV.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install SS Black Edition APK for Ad-Free Streaming on Android and Fire TV.md deleted file mode 100644 index aecb730b278cbcb616c3f7bf8469dd3a3477527a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Install SS Black Edition APK for Ad-Free Streaming on Android and Fire TV.md +++ /dev/null @@ -1,126 +0,0 @@ - -

    SS Black Edition APK: How to Download and Install It on Your Fire TV Device

    -

    Do you want to enjoy unlimited streaming of movies, TV shows, sports, and more on your Fire TV device? If yes, then you need to download and install SS Black Edition APK. This is a modified version of the popular SS IPTV app that offers you access to thousands of channels from different countries and genres. In this article, we will show you what SS Black Edition APK is, how to download it, how to install it, and how to use it on your Fire TV device.

    -

    What is SS Black Edition APK?

    -

    SS Black Edition APK is a modified version of the original SS IPTV app that allows you to watch live TV channels from various sources on your Android device. It has a sleek and simple interface that makes it easy to navigate and use. It also has some extra features and benefits that make it stand out from other IPTV apps.

    -

    ss black edition apk free download


    Download File ✏ ✏ ✏ https://urlca.com/2uOcfq



    -

    Features of SS Black Edition APK

    -

    Some of the features of SS Black Edition APK are:

    -
      -
    • It supports multiple playlists and EPG sources.
    • -
    • It has a built-in video player that supports various formats and codecs.
    • -
    • It has a parental control option that lets you restrict access to certain channels or categories.
    • -
    • It has a favorites section that lets you save your preferred channels for quick access.
    • -
    • It has a search function that lets you find any channel or program by name or keyword.
    • -
    • It has a settings section that lets you customize the app according to your preferences.
    • -
    -

    Benefits of SS Black Edition APK

    -

    Some of the benefits of SS Black Edition APK are:

    -
      -
    • It is free to download and use. You don't need to pay any subscription fees or sign up for any accounts.
    • -
    • It is ad-free. You don't have to deal with any annoying ads or pop-ups that interrupt your viewing experience.
    • -
    • It is virus-free. You don't have to worry about any malware or spyware that might harm your device or compromise your privacy.
    • -
    • It is updated regularly. You don't have to miss out on any new channels or features that might be added in the future.
    • -
    -

    How to Download SS Black Edition APK

    -

    If you want to download SS Black Edition APK, you need to follow some simple steps. Here they are:

    -

    Requirements for Downloading SS Black Edition APK

    -

    Before you download SS Black Edition APK, you need to make sure that you have the following requirements:

    -
      -
    • An Android device that runs on Android 4.1 or higher.
    • -
    • A stable internet connection.
    • -
    • A file manager app that can open and install APK files.
    • -
    • A web browser app that can access third-party websites.
    • -
    -

    Steps for Downloading SS Black Edition APK

    -

    Once you have the requirements, you can proceed with the steps for downloading SS Black Edition APK:

    -


    -
      -
    1. Open your web browser app and go to this link: . This is the official Reddit post where you can find the download link for SS Black Edition APK.
    2. -
    3. Scroll down until you see the download link. It should look like this: https://bit.ly/2SsBlackEdition. Tap on it to start the download process.
    4. -
    5. Wait for the download to finish. You should see a notification that says "Download complete" or something similar.
    6. -
    7. Go to your file manager app and locate the downloaded APK file. It should be in your Downloads folder or wherever you set your default download location.
    8. -
    9. Tap on the APK file to open it. You should see a prompt that asks you to allow the installation of unknown apps. Tap on "Settings" and enable the option that says "Allow from this source" or something similar.
    10. -
    11. Go back to the APK file and tap on it again. You should see another prompt that asks you to confirm the installation. Tap on "Install" and wait for the installation to finish.
    12. -
    13. You should see a message that says "App installed" or something similar. Tap on "Open" to launch the app or tap on "Done" to exit the installer.
    14. -
    -

    How to Install SS Black Edition APK on Your Fire TV Device

    -

    If you want to install SS Black Edition APK on your Fire TV device, you need to follow some different steps. Here they are:

    -

    Requirements for Installing SS Black Edition APK

    -

    Before you install SS Black Edition APK, you need to make sure that you have the following requirements:

    -
      -
    • A Fire TV device that runs on Fire OS 5 or higher.
    • -
    • A stable internet connection.
    • -
    • A downloader app that can access third-party websites and download APK files.
    • -
    • A file explorer app that can open and install APK files.
    • -
    -

    Steps for Installing SS Black Edition APK

    -

    Once you have the requirements, you can proceed with the steps for installing SS Black Edition APK:

    -
      -
    1. Go to your Fire TV device's settings and select "My Fire TV" or "Device". Then select "Developer options" and enable the option that says "Apps from Unknown Sources". This will allow you to install third-party apps on your device.
    2. -
    3. Go back to your Fire TV device's home screen and select the search icon. Type in "Downloader" and select the app that has an orange icon with a white arrow. Download and install the app if you don't have it already.
    4. -
    5. Open the downloader app and enter this URL: https://bit.ly/2SsBlackEdition. This is the same URL as before, but shortened for convenience. Tap on "Go" to start the download process.
    6. -
    7. Wait for the download to finish. You should see a prompt that asks you to install the app. Tap on "Install" and wait for the installation to finish.
    8. -
    9. You should see a message that says "App installed". Tap on "Open" to launch the app or tap on "Done" to exit the installer.
    10. -
    11. Delete the downloaded APK file from your downloader app to save some space. You can do this by tapping on "Files" and selecting the file. Then tap on "Delete" and confirm your choice.
    12. -
    -
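    If you prefer to sideload from a computer instead of using the Downloader app, you can also push the APK to your Fire TV with adb over your home network. The short Python sketch below is only an illustration: it assumes the Android platform tools (adb) are installed on your computer, that ADB debugging is turned on under Developer Options on the Fire TV, and that the IP address and file name are placeholders you replace with your own.

```python
import subprocess

FIRE_TV_IP = "192.168.1.50"          # placeholder: your Fire TV's IP (Settings > My Fire TV > About > Network)
APK_PATH = "ss_black_edition.apk"    # placeholder: path to the APK you downloaded

# Connect to the Fire TV over the network (ADB debugging must be enabled on the device).
subprocess.run(["adb", "connect", f"{FIRE_TV_IP}:5555"], check=True)

# Install the APK; -r replaces an existing installation if one is present.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)

# Disconnect when you are done.
subprocess.run(["adb", "disconnect"], check=True)
```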

    How to Use SS Black Edition APK on Your Fire TV Device

    -

    If you want to use SS Black Edition APK on your Fire TV device, you need to follow some simple steps. Here they are:

    -

    Features of SS Black Edition APK on Your Fire TV Device

    -

    Some of the features of SS Black Edition APK on your Fire TV device are:

    -
      -
    • It has a remote-friendly interface that makes it easy to navigate and use with your Fire TV remote.
    • -
    • It has a full-screen mode that lets you enjoy your streaming content without any distractions.
    • -
    • It has a fast-loading speed that lets you watch your channels without any buffering or lagging.
    • -
    • It has a high-quality resolution that lets you watch your channels in HD or 4K quality depending on your device and internet speed.
    • -
    -

    Tips and Tricks for Using SS Black Edition APK on Your Fire TV Device

    -

    Some of the tips and tricks for using SS Black Edition APK on your Fire TV device are:

    -
      -
    • To add a playlist or an EPG source, go to the settings section and select "General". Then tap on "Playlist" or "EPG" and enter the URL of your source. You can find many sources online or ask your IPTV provider for one.
    • -
    • To change the language, go to the settings section and select "Interface". Then tap on "Language" and choose your preferred language from the list.
    • -
    • To enable parental control, go to the settings section and select "Parental Control", where you can set a PIN and restrict access to certain channels or categories.
    • -
    • To update the app, go to the settings section and select "About". Then tap on "Check for updates" and follow the instructions. You can also visit this link: to see if there is a new version available for download.
    • -
    • Q: How can I contact the developer of SS Black Edition APK?
      A: You can contact the developer of SS Black Edition APK by visiting this link: . This is the official Reddit page where you can find the latest news, updates, and feedback about SS Black Edition APK. You can also leave a comment or send a message to the developer if you have any questions or suggestions.
    • - -
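    Because the app is driven by IPTV playlists, it can save time to sanity-check a playlist before you enter its URL in the settings. The Python sketch below is only a rough illustration that prints the channel names found in a standard M3U file; the file name is a placeholder and it only handles the common #EXTINF layout.

```python
def list_m3u_channels(path):
    """Print channel names and stream URLs from a simple M3U playlist."""
    name = None
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line.startswith("#EXTINF"):
                # The display name follows the last comma on the #EXTINF line.
                name = line.rsplit(",", 1)[-1]
            elif line and not line.startswith("#"):
                print(f"{name or 'Unknown channel'} -> {line}")
                name = None

list_m3u_channels("playlist.m3u")  # placeholder file name
```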

      I hope you enjoyed reading this article and learned something new. If you did, please share it with your friends and family who might be interested in SS Black Edition APK. Thank you for your time and attention.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Unlimited Cards and Coins with Coin Master MOD APK.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Unlimited Cards and Coins with Coin Master MOD APK.md deleted file mode 100644 index f16d4553804ceb8d139c7e28b0034d94a1191b9d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Unlimited Cards and Coins with Coin Master MOD APK.md +++ /dev/null @@ -1,94 +0,0 @@ -
      -

      How to Unlock Your Android Phone with Card APK

      -

      If you have an Android phone that is locked to a specific carrier, you may want to unlock it for various reasons. For example, you may want to switch to a different network that offers better plans or coverage, or you may want to use your phone abroad without paying expensive roaming fees. However, unlocking your phone can be a hassle, especially if you have to contact your carrier and pay a fee. Fortunately, there is a simpler and cheaper way to unlock your Android phone: using Card APK.

      -

      What is Card APK and Why You Need It

      -

      Card APK is a tool that can unlock your Android phone from any carrier

      -

      Card APK is an application that can generate an unlock code for your Android phone based on its IMEI number. IMEI stands for International Mobile Equipment Identity, and it is a unique 15-digit number that identifies your device. By entering this code into your phone, you can unlock it from any carrier and use it with any SIM card.

      -

      card apk unlocked


      Download ⇒⇒⇒ https://urlca.com/2uOfHR



      -

      You may need Card APK if you want to switch to a different network or travel abroad

      -

      If you bought your Android phone from a carrier, chances are it is locked to that carrier. This means you can only use it with their SIM card and their network. If you want to switch to a different carrier, you have to unlock your phone first. Otherwise, your phone will not recognize the new SIM card and will display an error message.

      -

      Similarly, if you want to use your phone abroad, you have to unlock it first. Otherwise, you will have to pay expensive roaming fees or buy a local SIM card. By unlocking your phone with Card APK, you can save money and enjoy more flexibility.

      -

      How to Download and Install Card APK on Your Android Phone

      -

      You can download Card APK from its official website or other sources

      -

      To use Card APK, you need to download and install it on your Android phone first. You can find the latest version of Card APK on its official website, or you can search for other sources online. However, be careful when downloading from unknown sources, as they may contain malware or viruses.
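    If you do fetch the file from a third-party mirror, one basic precaution is to compare its SHA-256 hash against a checksum published by a source you trust. The snippet below is just a sketch: the expected hash is a placeholder, not a real checksum for Card APK, and the file name is whatever you saved the download as.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder checksum
actual = sha256_of("card.apk")  # placeholder file name
print("OK" if actual == expected else f"Mismatch: {actual}")
```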

      -

      You need to enable unknown sources and grant permissions to install Card APK

      -

      Since Card APK is not available on the Google Play Store, you need to enable unknown sources on your phone settings before installing it. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps from sources other than the Play Store.

      -

      After enabling unknown sources, you need to grant permissions to Card APK to access your device information and storage. To do this, open the downloaded file and tap on Install. Then, follow the instructions on the screen and accept the permissions requested by Card APK.

      -

      How to Use Card APK to Unlock Your Android Phone

      -

      You need to insert a new SIM card and launch Card APK

      -

      Once you have installed Card APK on your phone, you need to insert a new SIM card from a different carrier into your phone. Make sure the SIM card is compatible with your phone model and size. Then, turn on your phone and launch Card APK from your app drawer.

      -

      You need to enter the IMEI number and the unlock code provided by Card APK

      -

      When you open Card APK, you will see a screen that asks you to enter your IMEI number and the unlock code. You can find your IMEI number by dialing *#06# on your phone or by checking the sticker under your battery. You can get the unlock code by tapping on the Generate Code button on Card APK. This will connect you to the Card APK server and generate a unique code for your phone.

      -

      After entering the IMEI number and the unlock code, tap on the Unlock button and wait for a few seconds. You will see a confirmation message that says your phone is unlocked successfully.

      -


      -

      You need to restart your phone and enjoy the unlocked features

      -

      The final step is to restart your phone and check if the new SIM card is working properly. You should be able to make and receive calls, send and receive texts, and access the internet with the new network. You can also check the network settings and change them according to your preferences.

      -

      Congratulations, you have unlocked your Android phone with Card APK. You can now use your phone with any carrier and any SIM card you want.

      -

      Pros and Cons of Using Card APK to Unlock Your Android Phone

      -

      Pros: fast, easy, cheap, and permanent unlocking solution

      -

      There are many benefits of using Card APK to unlock your Android phone. Some of them are:

      -
        -
      • It is fast: you can unlock your phone in a matter of minutes without waiting for your carrier or a third-party service.
      • -
      • It is easy: you don't need any technical skills or special equipment to use Card APK. All you need is a new SIM card and an internet connection.
      • -
      • It is cheap: you don't have to pay any fees or charges to use Card APK. It is a free app that you can download and use as many times as you want.
      • -
      • It is permanent: once you unlock your phone with Card APK, it will stay unlocked forever. You don't have to worry about relocking or updating your phone.
      • -
      -

      Cons: may void your warranty, may not work for some models, may pose security risks

      -

      However, there are also some drawbacks of using Card APK to unlock your Android phone. Some of them are:

      -
        -
      • It may void your warranty: unlocking your phone with Card APK may violate the terms and conditions of your carrier or manufacturer. This means you may lose your warranty or support if something goes wrong with your phone.
      • -
      • It may not work for some models: Card APK may not be compatible with some Android phone models or versions. This means you may not be able to unlock your phone or you may encounter errors or bugs while using Card APK.
      • -
      • It may pose security risks: downloading and installing Card APK from unknown sources may expose your phone to malware or viruses. This means you may compromise your personal data or damage your device.
      • -
      -

      Conclusion

      -

      Card APK is a convenient way to unlock your Android phone from any carrier and use it with any SIM card. It is fast, easy, cheap, and permanent. However, you should also be aware of the potential drawbacks of using Card APK, such as voiding your warranty, not working for some models, or posing security risks. Therefore, you should use Card APK at your own risk and discretion.

      -

      Frequently Asked Questions

      -
        -
      1. What is Card APK?
      2. -

        Card APK is an application that can generate an unlock code for your Android phone based on its IMEI number.

        -
      3. How does Card APK work?
      4. -

        Card APK works by connecting to its server and generating a unique code for your phone. You need to enter this code into your phone to unlock it from any carrier.

        -
      5. Is Card APK safe to use?
      6. -

        Card APK is safe to use as long as you download it from its official website or other trusted sources. However, you should also be careful about granting permissions to Card APK and protecting your device from malware or viruses.

        -
      7. Does Card APK work for all Android phones?
      8. -

        Card APK works for most Android phones that are locked to a specific carrier. However, it may not work for some models or versions that are not supported by Card APK.

        -
      9. Can I use Card APK more than once?
      10. -

        Yes, you can use Card APK more than once to unlock different phones or the same phone with different SIM cards. However, you should not abuse Card APK or use it for illegal purposes.

        -

        I hope this article has helped you understand how to unlock your Android phone with Card APK. If you have any questions or feedback, please leave a comment below. Thank you for reading.

        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/PUBG NEW STATE MOD APK - Unlimited Money Aimbot and More.md b/spaces/congsaPfin/Manga-OCR/logs/PUBG NEW STATE MOD APK - Unlimited Money Aimbot and More.md deleted file mode 100644 index ef6d7e9ba42d3224de0919a1a612412971067ef7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/PUBG NEW STATE MOD APK - Unlimited Money Aimbot and More.md +++ /dev/null @@ -1,121 +0,0 @@ -
        -

        New State Mobile Mod APK Hack: Everything You Need to Know

        -

        Are you a fan of battle royale games? Do you want to experience a new and exciting game that takes you to the future? If yes, then you might want to check out New State Mobile, a game that is set in 2051 and offers you a thrilling and immersive gameplay. But what if you want to have an edge over your opponents and enjoy some extra features that are not available in the original game? Well, that's where a mod apk hack comes in handy. In this article, we will tell you everything you need to know about New State Mobile mod apk hack, including what it is, how to download and install it, and what features it offers. Let's get started!

        -

        new state mobile mod apk hack


        DOWNLOADhttps://urlca.com/2uO9cQ



        -

        What is New State Mobile?

        -

        A futuristic battle royale game

        -

        New State Mobile is a game developed by Krafton, the same company that created PUBG Mobile, one of the most popular battle royale games in the world. New State Mobile is a spin-off of PUBG Mobile, but it is set in 2051, a time when the world has changed drastically due to wars, disasters, and technology. The game takes place in a fictional country called Troi, where 100 players compete against each other in a shrinking map until only one survives. The game features realistic graphics, dynamic sound effects, and smooth controls that make you feel like you are in the middle of a futuristic warzone.

        -

        The features and gameplay of New State Mobile

        -

        New State Mobile has many features and gameplay elements that make it stand out from other battle royale games. Some of them are:

        -
          -
        • You can customize your character with various outfits, skins, accessories, and emotes.
        • -
        • You can choose from different modes, such as solo, duo, squad, or team deathmatch.
        • -
        • You can explore different maps, such as urban areas, industrial zones, rural landscapes, or snowy mountains.
        • -
        • You can use different weapons, such as assault rifles, sniper rifles, shotguns, pistols, grenades, or melee weapons.
        • -
        • You can also use futuristic gadgets, such as drones, shields, holograms, or vehicles.
        • -
        • You can interact with the environment, such as breaking windows, doors, walls, or objects.
        • -
        • You can upgrade your weapons and gadgets with attachments and mods.
        • -
        • You can collect resources and craft items that can help you survive.
        • -
        -

        What is a mod apk hack?

        -

        A modified version of the original app

        -

        A mod apk hack is a modified version of the original app that has been altered by someone to add or remove some features. A mod apk hack usually comes in the form of a file that you can download and install on your device. A mod apk hack can give you access to features that are not available in the original app or that require you to pay money or watch ads. For example, a mod apk hack can give you unlimited coins, gems, health, ammo, or other resources. A mod apk hack can also unlock all the items, levels, modes, or characters that are otherwise locked or restricted.
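    Since an APK is just a ZIP archive, you can get a rough sense of whether a package has been repackaged by looking at the signing files in its META-INF folder and comparing the certificate with the one on an official build. The Python snippet below is only a small illustration of that idea, with a placeholder file name; it lists the signature entries but does not validate them.

```python
import zipfile

def list_signature_entries(apk_path: str):
    """List the signing-related files stored in an APK's META-INF directory."""
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            if name.startswith("META-INF/") and name.upper().endswith((".RSA", ".DSA", ".EC", ".SF", ".MF")):
                print(name)

list_signature_entries("game.apk")  # placeholder file name
```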

        -

        -

        The benefits and risks of using a mod apk hack

        -

        Using a mod apk hack can have some benefits and risks that you should be aware of before deciding to use one. Some of the benefits are:

        -
          -
        • You can enjoy the game more by having more options and freedom.
        • -
        • You can save time and money by not having to grind or spend money to get the resources or items you want.
        • -
        • You can have more fun by experimenting with different features and settings.
        • -
        -

        Some of the risks are:

        -
          -
        • You can get banned from the game or the app store if the developers detect that you are using a mod apk hack.
        • -
        • You can expose your device to malware, viruses, or spyware that can harm your data or privacy.
        • -
        • You can lose your progress or account if the mod apk hack is not compatible with the latest version of the game or the app.
        • -
        • You can ruin the balance and fairness of the game by having an unfair advantage over other players.
        • -
        -

        How to download and install New State Mobile mod apk hack?

        -

        The steps to follow

        -

        If you want to download and install New State Mobile mod apk hack, you need to follow these steps:

        -
          -
        1. Find a reliable and safe website that offers the mod apk hack file. You can search on Google or use a website like [ModApkStore].
        2. -
        3. Download the mod apk hack file to your device. Make sure you have enough storage space and a stable internet connection.
        4. -
        5. Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
        6. -
        7. Locate the mod apk hack file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for it to finish.
        8. -
        9. Launch the game and enjoy the mod features.
        10. -
        -

        The precautions to take

        -

        Before you download and install New State Mobile mod apk hack, you should take some precautions to avoid any problems or issues. Some of them are:

        -
          -
        • Backup your data and account before using the mod apk hack. You can do this by using a cloud service like Google Drive or Dropbox, or by using a third-party app like [Titanium Backup].
        • -
        • Disable any antivirus or firewall software on your device that might interfere with the mod apk hack. You can do this by going to Settings > Apps > Antivirus/Firewall and tapping on Force Stop or Disable.
        • -
        • Use a VPN service to hide your IP address and location from the developers or other players. You can do this by downloading a VPN app like [ExpressVPN] or [NordVPN] and connecting to a server of your choice.
        • -
        • Use a secondary or fake account to play the game with the mod apk hack. You can do this by creating a new account with a different email address or using a guest account.
        • -
        -
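    As a concrete way to take the backup mentioned above, one option on many devices is adb's built-in backup command run from a computer. The sketch below assumes USB debugging is enabled and that the package name is a placeholder; note that `adb backup` is deprecated on recent Android versions and some apps opt out of it, so treat this as a best-effort illustration.

```python
import subprocess

PACKAGE = "com.example.newstate"   # placeholder package name
OUTPUT = "newstate_backup.ab"      # backup archive written by adb

# Back up the app's data (and the APK itself with -apk) to a local .ab file.
subprocess.run(["adb", "backup", "-f", OUTPUT, "-apk", PACKAGE], check=True)
```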

        What are the features of New State Mobile mod apk hack?

        -

        A mega menu with various options

        -

        New State Mobile mod apk hack has a mega menu that you can access by tapping on a floating icon on the screen. The mega menu has various options that you can toggle on or off according to your preference. Some of the options are:

        -
          -
        • Aimbot: This option allows you to automatically aim at your enemies and shoot them with accuracy.
        • -
        • Wallhack: This option allows you to see through walls and objects and spot your enemies easily.
        • -
        • No Recoil: This option allows you to shoot without any recoil or spread, making your shots more stable and precise.
        • -
        • No Fog: This option allows you to remove any fog or smoke from the map, making it clearer and brighter.
        • -
        • No Grass: This option allows you to remove any grass or vegetation from the map, making it easier to spot your enemies.
        • -
        -

        Some examples of the mod features

        -

        To give you an idea of how the mod features work, here are some examples of what you can do with them:

        -
          -
        • You can use aimbot to snipe your enemies from a long distance without missing a shot.
        • -
        • You can use wallhack to ambush your enemies from behind cover or surprise them from above or below.
        • -
        • You can use no recoil to spray your enemies with bullets without losing control of your weapon.
        • -
        • You can use no fog to see everything clearly even in dark or cloudy weather conditions.
        • -
        • You can use no grass to spot any enemies hiding in bushes or fields.
        • -
        -

        Conclusion

        -

        A summary of the main points

        -

        New State Mobile is a futuristic battle royale game that offers you a thrilling and immersive gameplay. However, if you want to have more fun and excitement, you can try using a mod apk hack that gives you access to features that are not available in the original game or that require you to pay money or watch ads. For example, you can use a mod apk hack that has a mega menu with options such as aimbot, wallhack, no recoil, no fog, or no grass. However, you should also be aware of the risks and precautions of using a mod apk hack, such as getting banned, exposing your device to malware, losing your progress or account, or ruining the balance and fairness of the game. Therefore, you should use a mod apk hack at your own discretion and responsibility.

        -

        A call to action

        -

        If you are interested in trying out New State Mobile mod apk hack, you can download it from a reliable and safe website like [ModApkStore]. You can also check out other mod apk hacks for different games and apps on the same website. However, before you download and install any mod apk hack, make sure you backup your data and account, disable any antivirus or firewall software, use a VPN service, and use a secondary or fake account. We hope you found this article helpful and informative. If you did, please share it with your friends and leave a comment below. Thank you for reading and have fun playing New State Mobile with the mod apk hack!

        -

        FAQs

        -

        What is New State Mobile?

        -

        New State Mobile is a futuristic battle royale game that is set in 2051 and offers you a thrilling and immersive gameplay.

        -

        What is a mod apk hack?

        -

        A mod apk hack is a modified version of the original app that has been altered by someone to add or remove some features.

        -

        How to download and install New State Mobile mod apk hack?

        -

        You need to find a reliable and safe website that offers the mod apk hack file, download it to your device, enable the installation of apps from unknown sources, locate the file and tap on it to start the installation process, and launch the game and enjoy the mod features.

        -

        What are the features of New State Mobile mod apk hack?

        -

        New State Mobile mod apk hack has a mega menu that has options such as aimbot, wallhack, no recoil, no fog, or no grass.

        -

        What are the risks and precautions of using a mod apk hack?

        -

        You can get banned from the game or the app store, expose your device to malware or viruses, lose your progress or account, or ruin the balance and fairness of the game. You should backup your data and account, disable any antivirus or firewall software, use a VPN service, and use a secondary or fake account.

        -
        -
        \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/FIFA 14 PC Crack Out - Where to Find and Download the Full Cracked Game Safely.md b/spaces/contluForse/HuggingGPT/assets/FIFA 14 PC Crack Out - Where to Find and Download the Full Cracked Game Safely.md deleted file mode 100644 index 1a6bc11411eb547027b7234f7095f666dcb5660c..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/FIFA 14 PC Crack Out - Where to Find and Download the Full Cracked Game Safely.md +++ /dev/null @@ -1,5 +0,0 @@ -
        -

Experience the exciting and fun gameplay of the full Fifa 14 game yourself with Fifa 14 Crack. It is easy to install and supports multiple platforms. The long wait for us football fans has now come to an end. Fifa 14 is finally here, and you can enjoy playing it at zero cost simply by downloading the fully working, latest Fifa 14 Crack. Whether you are on PC, Android, iOS or even Nintendo 3DS, we have got you covered and will make sure to keep the majority happy.

        -

        fifa 14 pc crack out full cracked game


        Download Zip ✫✫✫ https://ssurll.com/2uzymM



        -
        -
        \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/transforms.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/transforms.py deleted file mode 100644 index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/transforms.py +++ /dev/null @@ -1,234 +0,0 @@ -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. - - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height). - """ - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. - - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". 
- """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std. - """ - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input. 
- """ - - def __init__(self): - pass - - def __call__(self, sample): - image = np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/dachenchen/HiWantJoin/chatgpt - windows.bat b/spaces/dachenchen/HiWantJoin/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/dachenchen/HiWantJoin/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/damian0815/Erasing-Concepts-In-Diffusion/util.py b/spaces/damian0815/Erasing-Concepts-In-Diffusion/util.py deleted file mode 100644 index e2b9860b348e005613d55637c399f74b7228f5fc..0000000000000000000000000000000000000000 --- a/spaces/damian0815/Erasing-Concepts-In-Diffusion/util.py +++ /dev/null @@ -1,107 +0,0 @@ -from PIL import Image -from matplotlib import pyplot as plt -import textwrap - - -def to_gif(images, path): - - images[0].save(path, save_all=True, - append_images=images[1:], loop=0, duration=len(images) * 20) - - -def figure_to_image(figure): - - figure.set_dpi(300) - - figure.canvas.draw() - - return Image.frombytes('RGB', figure.canvas.get_width_height(), figure.canvas.tostring_rgb()) - - -def image_grid(images, outpath=None, column_titles=None, row_titles=None): - - n_rows = len(images) - n_cols = len(images[0]) - - fig, axs = plt.subplots(nrows=n_rows, ncols=n_cols, - figsize=(n_cols, n_rows), squeeze=False) - - for row, _images in enumerate(images): - - for column, image in enumerate(_images): - ax = axs[row][column] - ax.imshow(image) - if column_titles and row == 0: - ax.set_title(textwrap.fill( - column_titles[column], width=12), fontsize='x-small') - if row_titles and column == 0: - ax.set_ylabel(row_titles[row], rotation=0, fontsize='x-small', labelpad=1.6 * len(row_titles[row])) - ax.set_xticks([]) - ax.set_yticks([]) - - plt.subplots_adjust(wspace=0, hspace=0) - - if outpath is not None: - plt.savefig(outpath, bbox_inches='tight', dpi=300) - plt.close() - else: - plt.tight_layout(pad=0) - image = figure_to_image(plt.gcf()) - plt.close() - return image - - - - - - - -def get_module(module, module_name): - - if isinstance(module_name, str): - module_name = module_name.split('.') - - if len(module_name) == 0: - return module - else: - module = getattr(module, module_name[0]) - return get_module(module, module_name[1:]) - - -def set_module(module, module_name, new_module): - - if isinstance(module_name, str): - module_name = module_name.split('.') - - if len(module_name) == 1: - return setattr(module, module_name[0], new_module) - else: - module = getattr(module, module_name[0]) - return set_module(module, module_name[1:], new_module) - 
- -def freeze(module): - - for parameter in module.parameters(): - - parameter.requires_grad = False - - -def unfreeze(module): - - for parameter in module.parameters(): - - parameter.requires_grad = True - - -def get_concat_h(im1, im2): - dst = Image.new('RGB', (im1.width + im2.width, im1.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (im1.width, 0)) - return dst - -def get_concat_v(im1, im2): - dst = Image.new('RGB', (im1.width, im1.height + im2.height)) - dst.paste(im1, (0, 0)) - dst.paste(im2, (0, im1.height)) - return dst \ No newline at end of file diff --git a/spaces/davidmd/lane_detection_UNet_Model/README.md b/spaces/davidmd/lane_detection_UNet_Model/README.md deleted file mode 100644 index 1e1cfe2f5521620a9d9d388ce68b631bd079871c..0000000000000000000000000000000000000000 --- a/spaces/davidmd/lane_detection_UNet_Model/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Lane_detection_UNet_Model -emoji: 👁 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 2.8.8 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/feature_fusion.py b/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/feature_fusion.py deleted file mode 100644 index dbe4e170e05894c12ebdc36ba1dc1de65e441b89..0000000000000000000000000000000000000000 --- a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/feature_fusion.py +++ /dev/null @@ -1,192 +0,0 @@ -""" -Feature Fusion for Varible-Length Data Processing -AFF/iAFF is referred and modified from https://github.com/YimianDai/open-aff/blob/master/aff_pytorch/aff_net/fusion.py -According to the paper: Yimian Dai et al, Attentional Feature Fusion, IEEE Winter Conference on Applications of Computer Vision, WACV 2021 -""" - -import torch -import torch.nn as nn - - -class DAF(nn.Module): - """ - 直接相加 DirectAddFuse - """ - - def __init__(self): - super(DAF, self).__init__() - - def forward(self, x, residual): - return x + residual - - -class iAFF(nn.Module): - """ - 多特征融合 iAFF - """ - - def __init__(self, channels=64, r=4, type="2D"): - super(iAFF, self).__init__() - inter_channels = int(channels // r) - - if type == "1D": - # 本地注意力 - self.local_att = nn.Sequential( - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - - # 全局注意力 - self.global_att = nn.Sequential( - nn.AdaptiveAvgPool1d(1), - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - - # 第二次本地注意力 - self.local_att2 = nn.Sequential( - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - # 第二次全局注意力 - self.global_att2 = nn.Sequential( - nn.AdaptiveAvgPool1d(1), - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - elif type == "2D": - # 本地注意力 - 
self.local_att = nn.Sequential( - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - - # 全局注意力 - self.global_att = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - - # 第二次本地注意力 - self.local_att2 = nn.Sequential( - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - # 第二次全局注意力 - self.global_att2 = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - else: - raise f"the type is not supported" - - self.sigmoid = nn.Sigmoid() - - def forward(self, x, residual): - flag = False - xa = x + residual - if xa.size(0) == 1: - xa = torch.cat([xa, xa], dim=0) - flag = True - xl = self.local_att(xa) - xg = self.global_att(xa) - xlg = xl + xg - wei = self.sigmoid(xlg) - xi = x * wei + residual * (1 - wei) - - xl2 = self.local_att2(xi) - xg2 = self.global_att(xi) - xlg2 = xl2 + xg2 - wei2 = self.sigmoid(xlg2) - xo = x * wei2 + residual * (1 - wei2) - if flag: - xo = xo[0].unsqueeze(0) - return xo - - -class AFF(nn.Module): - """ - 多特征融合 AFF - """ - - def __init__(self, channels=64, r=4, type="2D"): - super(AFF, self).__init__() - inter_channels = int(channels // r) - - if type == "1D": - self.local_att = nn.Sequential( - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - self.global_att = nn.Sequential( - nn.AdaptiveAvgPool1d(1), - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - elif type == "2D": - self.local_att = nn.Sequential( - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - self.global_att = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - else: - raise f"the type is not supported." 
- - self.sigmoid = nn.Sigmoid() - - def forward(self, x, residual): - flag = False - xa = x + residual - if xa.size(0) == 1: - xa = torch.cat([xa, xa], dim=0) - flag = True - xl = self.local_att(xa) - xg = self.global_att(xa) - xlg = xl + xg - wei = self.sigmoid(xlg) - xo = 2 * x * wei + 2 * residual * (1 - wei) - if flag: - xo = xo[0].unsqueeze(0) - return xo diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/utils/face_restoration_helper.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/utils/face_restoration_helper.py deleted file mode 100644 index 5d3fb8f3b95ed9959610e64f6d7373ea8a56ece8..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/utils/face_restoration_helper.py +++ /dev/null @@ -1,460 +0,0 @@ -import cv2 -import numpy as np -import os -import torch -from torchvision.transforms.functional import normalize - -from facelib.detection import init_detection_model -from facelib.parsing import init_parsing_model -from facelib.utils.misc import img2tensor, imwrite, is_gray, bgr2gray - - -def get_largest_face(det_faces, h, w): - - def get_location(val, length): - if val < 0: - return 0 - elif val > length: - return length - else: - return val - - face_areas = [] - for det_face in det_faces: - left = get_location(det_face[0], w) - right = get_location(det_face[2], w) - top = get_location(det_face[1], h) - bottom = get_location(det_face[3], h) - face_area = (right - left) * (bottom - top) - face_areas.append(face_area) - largest_idx = face_areas.index(max(face_areas)) - return det_faces[largest_idx], largest_idx - - -def get_center_face(det_faces, h=0, w=0, center=None): - if center is not None: - center = np.array(center) - else: - center = np.array([w / 2, h / 2]) - center_dist = [] - for det_face in det_faces: - face_center = np.array([(det_face[0] + det_face[2]) / 2, (det_face[1] + det_face[3]) / 2]) - dist = np.linalg.norm(face_center - center) - center_dist.append(dist) - center_idx = center_dist.index(min(center_dist)) - return det_faces[center_idx], center_idx - - -class FaceRestoreHelper(object): - """Helper for the face restoration pipeline (base class).""" - - def __init__(self, - upscale_factor, - face_size=512, - crop_ratio=(1, 1), - det_model='retinaface_resnet50', - save_ext='png', - template_3points=False, - pad_blur=False, - use_parse=False, - device=None): - self.template_3points = template_3points # improve robustness - self.upscale_factor = int(upscale_factor) - # the cropped face ratio based on the square face - self.crop_ratio = crop_ratio # (h, w) - assert (self.crop_ratio[0] >= 1 and self.crop_ratio[1] >= 1), 'crop ration only supports >=1' - self.face_size = (int(face_size * self.crop_ratio[1]), int(face_size * self.crop_ratio[0])) - - if self.template_3points: - self.face_template = np.array([[192, 240], [319, 240], [257, 371]]) - else: - # standard 5 landmarks for FFHQ faces with 512 x 512 - # facexlib - self.face_template = np.array([[192.98138, 239.94708], [318.90277, 240.1936], [256.63416, 314.01935], - [201.26117, 371.41043], [313.08905, 371.15118]]) - - # dlib: left_eye: 36:41 right_eye: 42:47 nose: 30,32,33,34 left mouth corner: 48 right mouth corner: 54 - # self.face_template = np.array([[193.65928, 242.98541], [318.32558, 243.06108], [255.67984, 328.82894], - # [198.22603, 372.82502], [313.91018, 372.75659]]) - - - self.face_template = self.face_template * (face_size / 512.0) - if self.crop_ratio[0] > 1: - self.face_template[:, 1] += face_size * (self.crop_ratio[0] - 1) / 2 - if 
self.crop_ratio[1] > 1: - self.face_template[:, 0] += face_size * (self.crop_ratio[1] - 1) / 2 - self.save_ext = save_ext - self.pad_blur = pad_blur - if self.pad_blur is True: - self.template_3points = False - - self.all_landmarks_5 = [] - self.det_faces = [] - self.affine_matrices = [] - self.inverse_affine_matrices = [] - self.cropped_faces = [] - self.restored_faces = [] - self.pad_input_imgs = [] - - if device is None: - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - else: - self.device = device - - # init face detection model - self.face_det = init_detection_model(det_model, half=False, device=self.device) - - # init face parsing model - self.use_parse = use_parse - self.face_parse = init_parsing_model(model_name='parsenet', device=self.device) - - def set_upscale_factor(self, upscale_factor): - self.upscale_factor = upscale_factor - - def read_image(self, img): - """img can be image path or cv2 loaded image.""" - # self.input_img is Numpy array, (h, w, c), BGR, uint8, [0, 255] - if isinstance(img, str): - img = cv2.imread(img) - - if np.max(img) > 256: # 16-bit image - img = img / 65535 * 255 - if len(img.shape) == 2: # gray image - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - elif img.shape[2] == 4: # BGRA image with alpha channel - img = img[:, :, 0:3] - - self.input_img = img - self.is_gray = is_gray(img, threshold=5) - if self.is_gray: - print('Grayscale input: True') - - if min(self.input_img.shape[:2])<512: - f = 512.0/min(self.input_img.shape[:2]) - self.input_img = cv2.resize(self.input_img, (0,0), fx=f, fy=f, interpolation=cv2.INTER_LINEAR) - - def get_face_landmarks_5(self, - only_keep_largest=False, - only_center_face=False, - resize=None, - blur_ratio=0.01, - eye_dist_threshold=None): - if resize is None: - scale = 1 - input_img = self.input_img - else: - h, w = self.input_img.shape[0:2] - scale = resize / min(h, w) - scale = max(1, scale) # always scale up - h, w = int(h * scale), int(w * scale) - interp = cv2.INTER_AREA if scale < 1 else cv2.INTER_LINEAR - input_img = cv2.resize(self.input_img, (w, h), interpolation=interp) - - with torch.no_grad(): - bboxes = self.face_det.detect_faces(input_img) - - if bboxes is None or bboxes.shape[0] == 0: - return 0 - else: - bboxes = bboxes / scale - - for bbox in bboxes: - # remove faces with too small eye distance: side faces or too small faces - eye_dist = np.linalg.norm([bbox[6] - bbox[8], bbox[7] - bbox[9]]) - if eye_dist_threshold is not None and (eye_dist < eye_dist_threshold): - continue - - if self.template_3points: - landmark = np.array([[bbox[i], bbox[i + 1]] for i in range(5, 11, 2)]) - else: - landmark = np.array([[bbox[i], bbox[i + 1]] for i in range(5, 15, 2)]) - self.all_landmarks_5.append(landmark) - self.det_faces.append(bbox[0:5]) - - if len(self.det_faces) == 0: - return 0 - if only_keep_largest: - h, w, _ = self.input_img.shape - self.det_faces, largest_idx = get_largest_face(self.det_faces, h, w) - self.all_landmarks_5 = [self.all_landmarks_5[largest_idx]] - elif only_center_face: - h, w, _ = self.input_img.shape - self.det_faces, center_idx = get_center_face(self.det_faces, h, w) - self.all_landmarks_5 = [self.all_landmarks_5[center_idx]] - - # pad blurry images - if self.pad_blur: - self.pad_input_imgs = [] - for landmarks in self.all_landmarks_5: - # get landmarks - eye_left = landmarks[0, :] - eye_right = landmarks[1, :] - eye_avg = (eye_left + eye_right) * 0.5 - mouth_avg = (landmarks[3, :] + landmarks[4, :]) * 0.5 - eye_to_eye = eye_right - eye_left - eye_to_mouth = 
mouth_avg - eye_avg - - # Get the oriented crop rectangle - # x: half width of the oriented crop rectangle - x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1] - # - np.flipud(eye_to_mouth) * [-1, 1]: rotate 90 clockwise - # norm with the hypotenuse: get the direction - x /= np.hypot(*x) # get the hypotenuse of a right triangle - rect_scale = 1.5 - x *= max(np.hypot(*eye_to_eye) * 2.0 * rect_scale, np.hypot(*eye_to_mouth) * 1.8 * rect_scale) - # y: half height of the oriented crop rectangle - y = np.flipud(x) * [-1, 1] - - # c: center - c = eye_avg + eye_to_mouth * 0.1 - # quad: (left_top, left_bottom, right_bottom, right_top) - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - # qsize: side length of the square - qsize = np.hypot(*x) * 2 - border = max(int(np.rint(qsize * 0.1)), 3) - - # get pad - # pad: (width_left, height_top, width_right, height_bottom) - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = [ - max(-pad[0] + border, 1), - max(-pad[1] + border, 1), - max(pad[2] - self.input_img.shape[0] + border, 1), - max(pad[3] - self.input_img.shape[1] + border, 1) - ] - - if max(pad) > 1: - # pad image - pad_img = np.pad(self.input_img, ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - # modify landmark coords - landmarks[:, 0] += pad[0] - landmarks[:, 1] += pad[1] - # blur pad images - h, w, _ = pad_img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], - np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], - np.float32(h - 1 - y) / pad[3])) - blur = int(qsize * blur_ratio) - if blur % 2 == 0: - blur += 1 - blur_img = cv2.boxFilter(pad_img, 0, ksize=(blur, blur)) - # blur_img = cv2.GaussianBlur(pad_img, (blur, blur), 0) - - pad_img = pad_img.astype('float32') - pad_img += (blur_img - pad_img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - pad_img += (np.median(pad_img, axis=(0, 1)) - pad_img) * np.clip(mask, 0.0, 1.0) - pad_img = np.clip(pad_img, 0, 255) # float32, [0, 255] - self.pad_input_imgs.append(pad_img) - else: - self.pad_input_imgs.append(np.copy(self.input_img)) - - return len(self.all_landmarks_5) - - def align_warp_face(self, save_cropped_path=None, border_mode='constant'): - """Align and warp faces with face template. 
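Each set of 5 landmarks is matched to ``self.face_template`` with
``cv2.estimateAffinePartial2D`` (LMEDS method), the input is warped to
``self.face_size`` with ``cv2.warpAffine``, and the resulting affine
matrices are kept in ``self.affine_matrices`` for later back-projection.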
- """ - if self.pad_blur: - assert len(self.pad_input_imgs) == len( - self.all_landmarks_5), f'Mismatched samples: {len(self.pad_input_imgs)} and {len(self.all_landmarks_5)}' - for idx, landmark in enumerate(self.all_landmarks_5): - # use 5 landmarks to get affine matrix - # use cv2.LMEDS method for the equivalence to skimage transform - # ref: https://blog.csdn.net/yichxi/article/details/115827338 - affine_matrix = cv2.estimateAffinePartial2D(landmark, self.face_template, method=cv2.LMEDS)[0] - self.affine_matrices.append(affine_matrix) - # warp and crop faces - if border_mode == 'constant': - border_mode = cv2.BORDER_CONSTANT - elif border_mode == 'reflect101': - border_mode = cv2.BORDER_REFLECT101 - elif border_mode == 'reflect': - border_mode = cv2.BORDER_REFLECT - if self.pad_blur: - input_img = self.pad_input_imgs[idx] - else: - input_img = self.input_img - cropped_face = cv2.warpAffine( - input_img, affine_matrix, self.face_size, borderMode=border_mode, borderValue=(135, 133, 132)) # gray - self.cropped_faces.append(cropped_face) - # save the cropped face - if save_cropped_path is not None: - path = os.path.splitext(save_cropped_path)[0] - save_path = f'{path}_{idx:02d}.{self.save_ext}' - imwrite(cropped_face, save_path) - - def get_inverse_affine(self, save_inverse_affine_path=None): - """Get inverse affine matrix.""" - for idx, affine_matrix in enumerate(self.affine_matrices): - inverse_affine = cv2.invertAffineTransform(affine_matrix) - inverse_affine *= self.upscale_factor - self.inverse_affine_matrices.append(inverse_affine) - # save inverse affine matrices - if save_inverse_affine_path is not None: - path, _ = os.path.splitext(save_inverse_affine_path) - save_path = f'{path}_{idx:02d}.pth' - torch.save(inverse_affine, save_path) - - - def add_restored_face(self, face): - if self.is_gray: - face = bgr2gray(face) # convert img into grayscale - self.restored_faces.append(face) - - - def paste_faces_to_input_image(self, save_path=None, upsample_img=None, draw_box=False, face_upsampler=None): - h, w, _ = self.input_img.shape - h_up, w_up = int(h * self.upscale_factor), int(w * self.upscale_factor) - - if upsample_img is None: - # simply resize the background - # upsample_img = cv2.resize(self.input_img, (w_up, h_up), interpolation=cv2.INTER_LANCZOS4) - upsample_img = cv2.resize(self.input_img, (w_up, h_up), interpolation=cv2.INTER_LINEAR) - else: - upsample_img = cv2.resize(upsample_img, (w_up, h_up), interpolation=cv2.INTER_LANCZOS4) - - assert len(self.restored_faces) == len( - self.inverse_affine_matrices), ('length of restored_faces and affine_matrices are different.') - - inv_mask_borders = [] - for restored_face, inverse_affine in zip(self.restored_faces, self.inverse_affine_matrices): - if face_upsampler is not None: - restored_face = face_upsampler.enhance(restored_face, outscale=self.upscale_factor)[0] - inverse_affine /= self.upscale_factor - inverse_affine[:, 2] *= self.upscale_factor - face_size = (self.face_size[0]*self.upscale_factor, self.face_size[1]*self.upscale_factor) - else: - # Add an offset to inverse affine matrix, for more precise back alignment - if self.upscale_factor > 1: - extra_offset = 0.5 * self.upscale_factor - else: - extra_offset = 0 - inverse_affine[:, 2] += extra_offset - face_size = self.face_size - inv_restored = cv2.warpAffine(restored_face, inverse_affine, (w_up, h_up)) - - # if draw_box or not self.use_parse: # use square parse maps - # mask = np.ones(face_size, dtype=np.float32) - # inv_mask = cv2.warpAffine(mask, inverse_affine, (w_up, 
h_up)) - # # remove the black borders - # inv_mask_erosion = cv2.erode( - # inv_mask, np.ones((int(2 * self.upscale_factor), int(2 * self.upscale_factor)), np.uint8)) - # pasted_face = inv_mask_erosion[:, :, None] * inv_restored - # total_face_area = np.sum(inv_mask_erosion) # // 3 - # # add border - # if draw_box: - # h, w = face_size - # mask_border = np.ones((h, w, 3), dtype=np.float32) - # border = int(1400/np.sqrt(total_face_area)) - # mask_border[border:h-border, border:w-border,:] = 0 - # inv_mask_border = cv2.warpAffine(mask_border, inverse_affine, (w_up, h_up)) - # inv_mask_borders.append(inv_mask_border) - # if not self.use_parse: - # # compute the fusion edge based on the area of face - # w_edge = int(total_face_area**0.5) // 20 - # erosion_radius = w_edge * 2 - # inv_mask_center = cv2.erode(inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8)) - # blur_size = w_edge * 2 - # inv_soft_mask = cv2.GaussianBlur(inv_mask_center, (blur_size + 1, blur_size + 1), 0) - # if len(upsample_img.shape) == 2: # upsample_img is gray image - # upsample_img = upsample_img[:, :, None] - # inv_soft_mask = inv_soft_mask[:, :, None] - - # always use square mask - mask = np.ones(face_size, dtype=np.float32) - inv_mask = cv2.warpAffine(mask, inverse_affine, (w_up, h_up)) - # remove the black borders - inv_mask_erosion = cv2.erode( - inv_mask, np.ones((int(2 * self.upscale_factor), int(2 * self.upscale_factor)), np.uint8)) - pasted_face = inv_mask_erosion[:, :, None] * inv_restored - total_face_area = np.sum(inv_mask_erosion) # // 3 - # add border - if draw_box: - h, w = face_size - mask_border = np.ones((h, w, 3), dtype=np.float32) - border = int(1400/np.sqrt(total_face_area)) - mask_border[border:h-border, border:w-border,:] = 0 - inv_mask_border = cv2.warpAffine(mask_border, inverse_affine, (w_up, h_up)) - inv_mask_borders.append(inv_mask_border) - # compute the fusion edge based on the area of face - w_edge = int(total_face_area**0.5) // 20 - erosion_radius = w_edge * 2 - inv_mask_center = cv2.erode(inv_mask_erosion, np.ones((erosion_radius, erosion_radius), np.uint8)) - blur_size = w_edge * 2 - inv_soft_mask = cv2.GaussianBlur(inv_mask_center, (blur_size + 1, blur_size + 1), 0) - if len(upsample_img.shape) == 2: # upsample_img is gray image - upsample_img = upsample_img[:, :, None] - inv_soft_mask = inv_soft_mask[:, :, None] - - # parse mask - if self.use_parse: - # inference - face_input = cv2.resize(restored_face, (512, 512), interpolation=cv2.INTER_LINEAR) - face_input = img2tensor(face_input.astype('float32') / 255., bgr2rgb=True, float32=True) - normalize(face_input, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) - face_input = torch.unsqueeze(face_input, 0).to(self.device) - with torch.no_grad(): - out = self.face_parse(face_input)[0] - out = out.argmax(dim=1).squeeze().cpu().numpy() - - parse_mask = np.zeros(out.shape) - MASK_COLORMAP = [0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 255, 0, 0, 0] - for idx, color in enumerate(MASK_COLORMAP): - parse_mask[out == idx] = color - # blur the mask - parse_mask = cv2.GaussianBlur(parse_mask, (101, 101), 11) - parse_mask = cv2.GaussianBlur(parse_mask, (101, 101), 11) - # remove the black borders - thres = 10 - parse_mask[:thres, :] = 0 - parse_mask[-thres:, :] = 0 - parse_mask[:, :thres] = 0 - parse_mask[:, -thres:] = 0 - parse_mask = parse_mask / 255. 
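# Resize the soft parsing mask to the cropped-face size, warp it back into the
# upsampled image with the inverse affine, and combine it with the square soft
# mask so that blending is limited to the parsed face regions.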
- - parse_mask = cv2.resize(parse_mask, face_size) - parse_mask = cv2.warpAffine(parse_mask, inverse_affine, (w_up, h_up), flags=3) - inv_soft_parse_mask = parse_mask[:, :, None] - # pasted_face = inv_restored - fuse_mask = (inv_soft_parse_mask 256: # 16-bit image - upsample_img = upsample_img.astype(np.uint16) - else: - upsample_img = upsample_img.astype(np.uint8) - - # draw bounding box - if draw_box: - # upsample_input_img = cv2.resize(input_img, (w_up, h_up)) - img_color = np.ones([*upsample_img.shape], dtype=np.float32) - img_color[:,:,0] = 0 - img_color[:,:,1] = 255 - img_color[:,:,2] = 0 - for inv_mask_border in inv_mask_borders: - upsample_img = inv_mask_border * img_color + (1 - inv_mask_border) * upsample_img - # upsample_input_img = inv_mask_border * img_color + (1 - inv_mask_border) * upsample_input_img - - if save_path is not None: - path = os.path.splitext(save_path)[0] - save_path = f'{path}.{self.save_ext}' - imwrite(upsample_img, save_path) - return upsample_img - - def clean_all(self): - self.all_landmarks_5 = [] - self.restored_faces = [] - self.affine_matrices = [] - self.cropped_faces = [] - self.inverse_affine_matrices = [] - self.det_faces = [] - self.pad_input_imgs = [] \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/loggingTools.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/loggingTools.py deleted file mode 100644 index 78704f5a9aa4811db98aa3132ed3f12ee0853ee2..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/loggingTools.py +++ /dev/null @@ -1,543 +0,0 @@ -import sys -import logging -import timeit -from functools import wraps -from collections.abc import Mapping, Callable -import warnings -from logging import PercentStyle - - -# default logging level used by Timer class -TIME_LEVEL = logging.DEBUG - -# per-level format strings used by the default formatter -# (the level name is not printed for INFO and DEBUG messages) -DEFAULT_FORMATS = { - "*": "%(levelname)s: %(message)s", - "INFO": "%(message)s", - "DEBUG": "%(message)s", -} - - -class LevelFormatter(logging.Formatter): - """Log formatter with level-specific formatting. - - Formatter class which optionally takes a dict of logging levels to - format strings, allowing to customise the log records appearance for - specific levels. - - - Attributes: - fmt: A dictionary mapping logging levels to format strings. - The ``*`` key identifies the default format string. - datefmt: As per py:class:`logging.Formatter` - style: As per py:class:`logging.Formatter` - - >>> import sys - >>> handler = logging.StreamHandler(sys.stdout) - >>> formatter = LevelFormatter( - ... fmt={ - ... '*': '[%(levelname)s] %(message)s', - ... 'DEBUG': '%(name)s [%(levelname)s] %(message)s', - ... 'INFO': '%(message)s', - ... 
}) - >>> handler.setFormatter(formatter) - >>> log = logging.getLogger('test') - >>> log.setLevel(logging.DEBUG) - >>> log.addHandler(handler) - >>> log.debug('this uses a custom format string') - test [DEBUG] this uses a custom format string - >>> log.info('this also uses a custom format string') - this also uses a custom format string - >>> log.warning("this one uses the default format string") - [WARNING] this one uses the default format string - """ - - def __init__(self, fmt=None, datefmt=None, style="%"): - if style != "%": - raise ValueError( - "only '%' percent style is supported in both python 2 and 3" - ) - if fmt is None: - fmt = DEFAULT_FORMATS - if isinstance(fmt, str): - default_format = fmt - custom_formats = {} - elif isinstance(fmt, Mapping): - custom_formats = dict(fmt) - default_format = custom_formats.pop("*", None) - else: - raise TypeError("fmt must be a str or a dict of str: %r" % fmt) - super(LevelFormatter, self).__init__(default_format, datefmt) - self.default_format = self._fmt - self.custom_formats = {} - for level, fmt in custom_formats.items(): - level = logging._checkLevel(level) - self.custom_formats[level] = fmt - - def format(self, record): - if self.custom_formats: - fmt = self.custom_formats.get(record.levelno, self.default_format) - if self._fmt != fmt: - self._fmt = fmt - # for python >= 3.2, _style needs to be set if _fmt changes - if PercentStyle: - self._style = PercentStyle(fmt) - return super(LevelFormatter, self).format(record) - - -def configLogger(**kwargs): - """A more sophisticated logging system configuation manager. - - This is more or less the same as :py:func:`logging.basicConfig`, - with some additional options and defaults. - - The default behaviour is to create a ``StreamHandler`` which writes to - sys.stderr, set a formatter using the ``DEFAULT_FORMATS`` strings, and add - the handler to the top-level library logger ("fontTools"). - - A number of optional keyword arguments may be specified, which can alter - the default behaviour. - - Args: - - logger: Specifies the logger name or a Logger instance to be - configured. (Defaults to "fontTools" logger). Unlike ``basicConfig``, - this function can be called multiple times to reconfigure a logger. - If the logger or any of its children already exists before the call is - made, they will be reset before the new configuration is applied. - filename: Specifies that a ``FileHandler`` be created, using the - specified filename, rather than a ``StreamHandler``. - filemode: Specifies the mode to open the file, if filename is - specified. (If filemode is unspecified, it defaults to ``a``). - format: Use the specified format string for the handler. This - argument also accepts a dictionary of format strings keyed by - level name, to allow customising the records appearance for - specific levels. The special ``'*'`` key is for 'any other' level. - datefmt: Use the specified date/time format. - level: Set the logger level to the specified level. - stream: Use the specified stream to initialize the StreamHandler. Note - that this argument is incompatible with ``filename`` - if both - are present, ``stream`` is ignored. - handlers: If specified, this should be an iterable of already created - handlers, which will be added to the logger. Any handler in the - list which does not have a formatter assigned will be assigned the - formatter created in this function. - filters: If specified, this should be an iterable of already created - filters. 
If the ``handlers`` do not already have filters assigned, - these filters will be added to them. - propagate: All loggers have a ``propagate`` attribute which determines - whether to continue searching for handlers up the logging hierarchy. - If not provided, the "propagate" attribute will be set to ``False``. - """ - # using kwargs to enforce keyword-only arguments in py2. - handlers = kwargs.pop("handlers", None) - if handlers is None: - if "stream" in kwargs and "filename" in kwargs: - raise ValueError( - "'stream' and 'filename' should not be " "specified together" - ) - else: - if "stream" in kwargs or "filename" in kwargs: - raise ValueError( - "'stream' or 'filename' should not be " - "specified together with 'handlers'" - ) - if handlers is None: - filename = kwargs.pop("filename", None) - mode = kwargs.pop("filemode", "a") - if filename: - h = logging.FileHandler(filename, mode) - else: - stream = kwargs.pop("stream", None) - h = logging.StreamHandler(stream) - handlers = [h] - # By default, the top-level library logger is configured. - logger = kwargs.pop("logger", "fontTools") - if not logger or isinstance(logger, str): - # empty "" or None means the 'root' logger - logger = logging.getLogger(logger) - # before (re)configuring, reset named logger and its children (if exist) - _resetExistingLoggers(parent=logger.name) - # use DEFAULT_FORMATS if 'format' is None - fs = kwargs.pop("format", None) - dfs = kwargs.pop("datefmt", None) - # XXX: '%' is the only format style supported on both py2 and 3 - style = kwargs.pop("style", "%") - fmt = LevelFormatter(fs, dfs, style) - filters = kwargs.pop("filters", []) - for h in handlers: - if h.formatter is None: - h.setFormatter(fmt) - if not h.filters: - for f in filters: - h.addFilter(f) - logger.addHandler(h) - if logger.name != "root": - # stop searching up the hierarchy for handlers - logger.propagate = kwargs.pop("propagate", False) - # set a custom severity level - level = kwargs.pop("level", None) - if level is not None: - logger.setLevel(level) - if kwargs: - keys = ", ".join(kwargs.keys()) - raise ValueError("Unrecognised argument(s): %s" % keys) - - -def _resetExistingLoggers(parent="root"): - """Reset the logger named 'parent' and all its children to their initial - state, if they already exist in the current configuration. - """ - root = logging.root - # get sorted list of all existing loggers - existing = sorted(root.manager.loggerDict.keys()) - if parent == "root": - # all the existing loggers are children of 'root' - loggers_to_reset = [parent] + existing - elif parent not in existing: - # nothing to do - return - elif parent in existing: - loggers_to_reset = [parent] - # collect children, starting with the entry after parent name - i = existing.index(parent) + 1 - prefixed = parent + "." - pflen = len(prefixed) - num_existing = len(existing) - while i < num_existing: - if existing[i][:pflen] == prefixed: - loggers_to_reset.append(existing[i]) - i += 1 - for name in loggers_to_reset: - if name == "root": - root.setLevel(logging.WARNING) - for h in root.handlers[:]: - root.removeHandler(h) - for f in root.filters[:]: - root.removeFilters(f) - root.disabled = False - else: - logger = root.manager.loggerDict[name] - logger.level = logging.NOTSET - logger.handlers = [] - logger.filters = [] - logger.propagate = True - logger.disabled = False - - -class Timer(object): - """Keeps track of overall time and split/lap times. 
- - >>> import time - >>> timer = Timer() - >>> time.sleep(0.01) - >>> print("First lap:", timer.split()) - First lap: ... - >>> time.sleep(0.02) - >>> print("Second lap:", timer.split()) - Second lap: ... - >>> print("Overall time:", timer.time()) - Overall time: ... - - Can be used as a context manager inside with-statements. - - >>> with Timer() as t: - ... time.sleep(0.01) - >>> print("%0.3f seconds" % t.elapsed) - 0... seconds - - If initialised with a logger, it can log the elapsed time automatically - upon exiting the with-statement. - - >>> import logging - >>> log = logging.getLogger("my-fancy-timer-logger") - >>> configLogger(logger=log, level="DEBUG", format="%(message)s", stream=sys.stdout) - >>> with Timer(log, 'do something'): - ... time.sleep(0.01) - Took ... to do something - - The same Timer instance, holding a reference to a logger, can be reused - in multiple with-statements, optionally with different messages or levels. - - >>> timer = Timer(log) - >>> with timer(): - ... time.sleep(0.01) - elapsed time: ...s - >>> with timer('redo it', level=logging.INFO): - ... time.sleep(0.02) - Took ... to redo it - - It can also be used as a function decorator to log the time elapsed to run - the decorated function. - - >>> @timer() - ... def test1(): - ... time.sleep(0.01) - >>> @timer('run test 2', level=logging.INFO) - ... def test2(): - ... time.sleep(0.02) - >>> test1() - Took ... to run 'test1' - >>> test2() - Took ... to run test 2 - """ - - # timeit.default_timer choses the most accurate clock for each platform - _time = timeit.default_timer - default_msg = "elapsed time: %(time).3fs" - default_format = "Took %(time).3fs to %(msg)s" - - def __init__(self, logger=None, msg=None, level=None, start=None): - self.reset(start) - if logger is None: - for arg in ("msg", "level"): - if locals().get(arg) is not None: - raise ValueError("'%s' can't be specified without a 'logger'" % arg) - self.logger = logger - self.level = level if level is not None else TIME_LEVEL - self.msg = msg - - def reset(self, start=None): - """Reset timer to 'start_time' or the current time.""" - if start is None: - self.start = self._time() - else: - self.start = start - self.last = self.start - self.elapsed = 0.0 - - def time(self): - """Return the overall time (in seconds) since the timer started.""" - return self._time() - self.start - - def split(self): - """Split and return the lap time (in seconds) in between splits.""" - current = self._time() - self.elapsed = current - self.last - self.last = current - return self.elapsed - - def formatTime(self, msg, time): - """Format 'time' value in 'msg' and return formatted string. - If 'msg' contains a '%(time)' format string, try to use that. - Otherwise, use the predefined 'default_format'. - If 'msg' is empty or None, fall back to 'default_msg'. - """ - if not msg: - msg = self.default_msg - if msg.find("%(time)") < 0: - msg = self.default_format % {"msg": msg, "time": time} - else: - try: - msg = msg % {"time": time} - except (KeyError, ValueError): - pass # skip if the format string is malformed - return msg - - def __enter__(self): - """Start a new lap""" - self.last = self._time() - self.elapsed = 0.0 - return self - - def __exit__(self, exc_type, exc_value, traceback): - """End the current lap. If timer has a logger, log the time elapsed, - using the format string in self.msg (or the default one). 
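The raw parts are also passed to the logger as a dict
(``{'msg': ..., 'time': ...}``) so that handlers can accumulate
aggregate timing statistics.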
- """ - time = self.split() - if self.logger is None or exc_type: - # if there's no logger attached, or if any exception occurred in - # the with-statement, exit without logging the time - return - message = self.formatTime(self.msg, time) - # Allow log handlers to see the individual parts to facilitate things - # like a server accumulating aggregate stats. - msg_parts = {"msg": self.msg, "time": time} - self.logger.log(self.level, message, msg_parts) - - def __call__(self, func_or_msg=None, **kwargs): - """If the first argument is a function, return a decorator which runs - the wrapped function inside Timer's context manager. - Otherwise, treat the first argument as a 'msg' string and return an updated - Timer instance, referencing the same logger. - A 'level' keyword can also be passed to override self.level. - """ - if isinstance(func_or_msg, Callable): - func = func_or_msg - # use the function name when no explicit 'msg' is provided - if not self.msg: - self.msg = "run '%s'" % func.__name__ - - @wraps(func) - def wrapper(*args, **kwds): - with self: - return func(*args, **kwds) - - return wrapper - else: - msg = func_or_msg or kwargs.get("msg") - level = kwargs.get("level", self.level) - return self.__class__(self.logger, msg, level) - - def __float__(self): - return self.elapsed - - def __int__(self): - return int(self.elapsed) - - def __str__(self): - return "%.3f" % self.elapsed - - -class ChannelsFilter(logging.Filter): - """Provides a hierarchical filter for log entries based on channel names. - - Filters out records emitted from a list of enabled channel names, - including their children. It works the same as the ``logging.Filter`` - class, but allows the user to specify multiple channel names. - - >>> import sys - >>> handler = logging.StreamHandler(sys.stdout) - >>> handler.setFormatter(logging.Formatter("%(message)s")) - >>> filter = ChannelsFilter("A.B", "C.D") - >>> handler.addFilter(filter) - >>> root = logging.getLogger() - >>> root.addHandler(handler) - >>> root.setLevel(level=logging.DEBUG) - >>> logging.getLogger('A.B').debug('this record passes through') - this record passes through - >>> logging.getLogger('A.B.C').debug('records from children also pass') - records from children also pass - >>> logging.getLogger('C.D').debug('this one as well') - this one as well - >>> logging.getLogger('A.B.').debug('also this one') - also this one - >>> logging.getLogger('A.F').debug('but this one does not!') - >>> logging.getLogger('C.DE').debug('neither this one!') - """ - - def __init__(self, *names): - self.names = names - self.num = len(names) - self.lengths = {n: len(n) for n in names} - - def filter(self, record): - if self.num == 0: - return True - for name in self.names: - nlen = self.lengths[name] - if name == record.name: - return True - elif record.name.find(name, 0, nlen) == 0 and record.name[nlen] == ".": - return True - return False - - -class CapturingLogHandler(logging.Handler): - def __init__(self, logger, level): - super(CapturingLogHandler, self).__init__(level=level) - self.records = [] - if isinstance(logger, str): - self.logger = logging.getLogger(logger) - else: - self.logger = logger - - def __enter__(self): - self.original_disabled = self.logger.disabled - self.original_level = self.logger.level - self.original_propagate = self.logger.propagate - - self.logger.addHandler(self) - self.logger.setLevel(self.level) - self.logger.disabled = False - self.logger.propagate = False - - return self - - def __exit__(self, type, value, traceback): - 
self.logger.removeHandler(self) - self.logger.setLevel(self.original_level) - self.logger.disabled = self.original_disabled - self.logger.propagate = self.original_propagate - - return self - - def emit(self, record): - self.records.append(record) - - def assertRegex(self, regexp, msg=None): - import re - - pattern = re.compile(regexp) - for r in self.records: - if pattern.search(r.getMessage()): - return True - if msg is None: - msg = "Pattern '%s' not found in logger records" % regexp - assert 0, msg - - -class LogMixin(object): - """Mixin class that adds logging functionality to another class. - - You can define a new class that subclasses from ``LogMixin`` as well as - other base classes through multiple inheritance. - All instances of that class will have a ``log`` property that returns - a ``logging.Logger`` named after their respective ``.``. - - For example: - - >>> class BaseClass(object): - ... pass - >>> class MyClass(LogMixin, BaseClass): - ... pass - >>> a = MyClass() - >>> isinstance(a.log, logging.Logger) - True - >>> print(a.log.name) - fontTools.misc.loggingTools.MyClass - >>> class AnotherClass(MyClass): - ... pass - >>> b = AnotherClass() - >>> isinstance(b.log, logging.Logger) - True - >>> print(b.log.name) - fontTools.misc.loggingTools.AnotherClass - """ - - @property - def log(self): - if not hasattr(self, "_log"): - name = ".".join((self.__class__.__module__, self.__class__.__name__)) - self._log = logging.getLogger(name) - return self._log - - -def deprecateArgument(name, msg, category=UserWarning): - """Raise a warning about deprecated function argument 'name'.""" - warnings.warn("%r is deprecated; %s" % (name, msg), category=category, stacklevel=3) - - -def deprecateFunction(msg, category=UserWarning): - """Decorator to raise a warning when a deprecated function is called.""" - - def decorator(func): - @wraps(func) - def wrapper(*args, **kwargs): - warnings.warn( - "%r is deprecated; %s" % (func.__name__, msg), - category=category, - stacklevel=2, - ) - return func(*args, **kwargs) - - return wrapper - - return decorator - - -if __name__ == "__main__": - import doctest - - sys.exit(doctest.testmod(optionflags=doctest.ELLIPSIS).failed) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-ac935314.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-ac935314.js deleted file mode 100644 index 16ddd47eb986f041fe3d62bad853e7d7071dc982..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-ac935314.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as te,e as se,s as ae,F as S,G as j,w as I,u as B,H as q,C as ie,V as oe,ae as ue,o as z,m as C,g as d,h as b,Q as fe,R as _e,r as re,v as me,k as v,I as G,P as ce,X as H,Y as M,j as K,n as X,Z as y,t as he,K as Y,p as E,x as ge,B as de}from"./index-9e76ffee.js";import{B as be}from"./Button-30a08c0b.js";import{B as ve}from"./BlockLabel-9545c6da.js";import{E as ke}from"./Empty-8e3485c0.js";import{I as p}from"./Image-953318a0.js";import{n as Z}from"./ModifyUpload.svelte_svelte_type_style_lang-14b768c9.js";function J(n,e,t){const l=n.slice();return l[27]=e[t][0],l[12]=e[t][1],l[29]=t,l}function O(n,e,t){const l=n.slice();return l[30]=e[t][0],l[12]=e[t][1],l[29]=t,l}function we(n){let e,t,l,s,i,a,r=G(n[13]?n[13][1]:[]),m=[];for(let 
u=0;u{_[w]=null}),me(),r=_[a],r?r.p(f,g):(r=_[a]=h[a](f),r.c()),I(r,1),r.m(i,null))},i(f){m||(I(e.$$.fragment,f),I(l.$$.fragment,f),I(r),m=!0)},o(f){B(e.$$.fragment,f),B(l.$$.fragment,f),B(r),m=!1},d(f){f&&(v(t),v(s),v(i)),q(e,f),q(l,f),_[a].d()}}}function Me(n){let e,t;return e=new be({props:{visible:n[2],elem_id:n[0],elem_classes:n[1],padding:!1,height:n[5],width:n[6],allow_overflow:!1,container:n[8],scale:n[9],min_width:n[10],$$slots:{default:[Ae]},$$scope:{ctx:n}}}),{c(){S(e.$$.fragment)},m(l,s){j(e,l,s),t=!0},p(l,s){const i={};s[0]&4&&(i.visible=l[2]),s[0]&1&&(i.elem_id=l[0]),s[0]&2&&(i.elem_classes=l[1]),s[0]&32&&(i.height=l[5]),s[0]&64&&(i.width=l[6]),s[0]&256&&(i.container=l[8]),s[0]&512&&(i.scale=l[9]),s[0]&1024&&(i.min_width=l[10]),s[0]&30904|s[1]&2&&(i.$$scope={dirty:s,ctx:l}),e.$set(i)},i(l){t||(I(e.$$.fragment,l),t=!0)},o(l){B(e.$$.fragment,l),t=!1},d(l){q(e,l)}}}function Ce(n,e,t){let{elem_id:l=""}=e,{elem_classes:s=[]}=e,{visible:i=!0}=e,{value:a}=e,r,m,{label:c="Annotated Image"}=e,{show_label:u=!0}=e,{show_legend:h=!0}=e,{height:_}=e,{width:k}=e,{color_map:f}=e,{container:g=!0}=e,{scale:D=null}=e,{min_width:A=void 0}=e,{root:w}=e,{root_url:F}=e,L=null,{loading_status:V}=e;const N=ie();function P(o){t(14,L=o)}function Q(){t(14,L=null)}const x=o=>P(o),$=o=>P(o),ee=()=>Q(),le=()=>Q(),ne=(o,R)=>N("select",{index:o,value:R});return n.$$set=o=>{"elem_id"in o&&t(0,l=o.elem_id),"elem_classes"in o&&t(1,s=o.elem_classes),"visible"in o&&t(2,i=o.visible),"value"in o&&t(18,a=o.value),"label"in o&&t(12,c=o.label),"show_label"in o&&t(3,u=o.show_label),"show_legend"in o&&t(4,h=o.show_legend),"height"in o&&t(5,_=o.height),"width"in o&&t(6,k=o.width),"color_map"in o&&t(7,f=o.color_map),"container"in o&&t(8,g=o.container),"scale"in o&&t(9,D=o.scale),"min_width"in o&&t(10,A=o.min_width),"root"in o&&t(19,w=o.root),"root_url"in o&&t(20,F=o.root_url),"loading_status"in o&&t(11,V=o.loading_status)},n.$$.update=()=>{n.$$.dirty[0]&3932160&&(a!==r&&(t(21,r=a),N("change")),a?t(13,m=[Z(a[0],w,F),a[1].map(([o,R])=>[Z(o,w,F),R])]):t(13,m=null))},[l,s,i,u,h,_,k,f,g,D,A,V,c,m,L,N,P,Q,a,w,F,r,x,$,ee,le,ne]}class Ee extends te{constructor(e){super(),se(this,e,Ce,Me,ae,{elem_id:0,elem_classes:1,visible:2,value:18,label:12,show_label:3,show_legend:4,height:5,width:6,color_map:7,container:8,scale:9,min_width:10,root:19,root_url:20,loading_status:11},null,[-1,-1])}}const Ge=Ee,He=["static"];export{Ge as Component,He as modes}; -//# sourceMappingURL=index-ac935314.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/IconButton-0ac328a0.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/IconButton-0ac328a0.js deleted file mode 100644 index fdeca4e9db12e60d0efc97d6752d95f5f4821a69..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/IconButton-0ac328a0.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as w,e as I,s as k,m,o as p,F as q,g as f,Y as b,h as g,j as _,G as v,p as S,w as j,u as B,k as h,H as C,t as E,x as F,E as G}from"./index-39fce9e2.js";import"./Button-79f6e3bf.js";function d(l){let e,i;return{c(){e=m("span"),i=E(l[1]),f(e,"class","svelte-1030q2h")},m(a,s){g(a,e,s),_(e,i)},p(a,s){s&2&&F(i,a[1])},d(a){a&&h(e)}}}function H(l){let e,i,a,s,o,c,r,n=l[2]&&d(l);return s=new 
l[0]({}),{c(){e=m("button"),n&&n.c(),i=p(),a=m("div"),q(s.$$.fragment),f(a,"class","svelte-1030q2h"),f(e,"aria-label",l[1]),f(e,"title",l[1]),f(e,"class","svelte-1030q2h"),b(e,"pending",l[3])},m(t,u){g(t,e,u),n&&n.m(e,null),_(e,i),_(e,a),v(s,a,null),o=!0,c||(r=S(e,"click",l[4]),c=!0)},p(t,[u]){t[2]?n?n.p(t,u):(n=d(t),n.c(),n.m(e,i)):n&&(n.d(1),n=null),(!o||u&2)&&f(e,"aria-label",t[1]),(!o||u&2)&&f(e,"title",t[1]),(!o||u&8)&&b(e,"pending",t[3])},i(t){o||(j(s.$$.fragment,t),o=!0)},o(t){B(s.$$.fragment,t),o=!1},d(t){t&&h(e),n&&n.d(),C(s),c=!1,r()}}}function Y(l,e,i){let{Icon:a}=e,{label:s=""}=e,{show_label:o=!1}=e,{pending:c=!1}=e;function r(n){G.call(this,l,n)}return l.$$set=n=>{"Icon"in n&&i(0,a=n.Icon),"label"in n&&i(1,s=n.label),"show_label"in n&&i(2,o=n.show_label),"pending"in n&&i(3,c=n.pending)},[a,s,o,c,r]}class D extends w{constructor(e){super(),I(this,e,Y,H,k,{Icon:0,label:1,show_label:2,pending:3})}}export{D as I}; -//# sourceMappingURL=IconButton-0ac328a0.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Player-1e00f554.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Player-1e00f554.css deleted file mode 100644 index a9fd7de561508b5989c623a98722cb397f7fd885..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Player-1e00f554.css +++ /dev/null @@ -1 +0,0 @@ -span.svelte-w5wajl.svelte-w5wajl{text-shadow:0 0 8px rgba(0,0,0,.5)}progress.svelte-w5wajl.svelte-w5wajl{margin-right:var(--size-3);border-radius:var(--radius-sm);width:var(--size-full);height:var(--size-2)}progress.svelte-w5wajl.svelte-w5wajl::-webkit-progress-bar{border-radius:2px;background-color:#fff3;overflow:hidden}progress.svelte-w5wajl.svelte-w5wajl::-webkit-progress-value{background-color:#ffffffe6}video.svelte-w5wajl.svelte-w5wajl{position:inherit;background-color:#000;width:var(--size-full);height:var(--size-full);object-fit:contain}.mirror.svelte-w5wajl.svelte-w5wajl{transform:scaleX(-1)}.controls.svelte-w5wajl.svelte-w5wajl{position:absolute;bottom:0;opacity:0;transition:.5s;margin:var(--size-2);border-radius:var(--radius-md);background:var(--color-grey-800);padding:var(--size-2) var(--size-1);width:calc(100% - .75rem);width:calc(100% - var(--size-2) * 2)}.wrap.svelte-w5wajl:hover .controls.svelte-w5wajl{opacity:1}.inner.svelte-w5wajl.svelte-w5wajl{display:flex;justify-content:space-between;align-items:center;padding-right:var(--size-2);padding-left:var(--size-2);width:var(--size-full);height:var(--size-full)}.icon.svelte-w5wajl.svelte-w5wajl{display:flex;justify-content:center;cursor:pointer;width:var(--size-6);color:#fff}.time.svelte-w5wajl.svelte-w5wajl{flex-shrink:0;margin-right:var(--size-3);margin-left:var(--size-3);color:#fff;font-size:var(--text-sm);font-family:var(--font-mono)}.wrap.svelte-w5wajl.svelte-w5wajl{position:relative;background-color:var(--background-fill-secondary);height:var(--size-full);width:var(--size-full)} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-ed872773.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-ed872773.js deleted file mode 100644 index 9ac05f5163716a84e89b137decc67f9437b64627..0000000000000000000000000000000000000000 --- 
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-ed872773.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as R,e as Y,s as q,f as D,g as _,h as L,j as b,n as Z,k as M,m as p,t as N,o as C,Y as V,K as I,x as A,C as T,I as F,P as U,Z as y,p as x,F as S,G as j,w as h,u as H,H as E,V as $,ae as ee,Q as le,R as te,r as G,v as K,E as ne}from"./index-39fce9e2.js";import{B as se}from"./Button-79f6e3bf.js";import{B as ae}from"./BlockLabel-b1428685.js";import{E as ie}from"./Empty-16d6169a.js";function ce(s){let e,t;return{c(){e=D("svg"),t=D("path"),_(t,"fill","currentColor"),_(t,"d","M4 2H2v26a2 2 0 0 0 2 2h26v-2H4v-3h22v-8H4v-4h14V5H4Zm20 17v4H4v-4ZM16 7v4H4V7Z"),_(e,"xmlns","http://www.w3.org/2000/svg"),_(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),_(e,"aria-hidden","true"),_(e,"role","img"),_(e,"class","iconify iconify--carbon"),_(e,"width","100%"),_(e,"height","100%"),_(e,"preserveAspectRatio","xMidYMid meet"),_(e,"viewBox","0 0 32 32")},m(l,n){L(l,e,n),b(e,t)},p:Z,i:Z,o:Z,d(l){l&&M(e)}}}class W extends R{constructor(e){super(),Y(this,e,null,ce,q,{})}}function P(s,e,t){const l=s.slice();return l[5]=e[t],l[7]=t,l}function Q(s){let e,t=F(s[0].confidences),l=[];for(let n=0;n{n("select",{index:o,value:g.label})};return s.$$set=o=>{"value"in o&&t(0,l=o.value),"color"in o&&t(1,a=o.color),"selectable"in o&&t(2,i=o.selectable)},[l,a,i,n,f]}class re extends R{constructor(e){super(),Y(this,e,oe,fe,q,{value:0,color:1,selectable:2})}}function O(s){let e,t;return e=new ae({props:{Icon:W,label:s[5],disable:s[6]===!1}}),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},p(l,n){const a={};n&32&&(a.label=l[5]),n&64&&(a.disable=l[6]===!1),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function ue(s){let e,t;return e=new ie({props:{unpadded_box:!0,$$slots:{default:[de]},$$scope:{ctx:s}}}),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},p(l,n){const a={};n&65536&&(a.$$scope={dirty:n,ctx:l}),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function _e(s){let e,t;return e=new re({props:{selectable:s[11],value:s[4],color:s[3]}}),e.$on("select",s[14]),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},p(l,n){const a={};n&2048&&(a.selectable=l[11]),n&16&&(a.value=l[4]),n&8&&(a.color=l[3]),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function de(s){let e,t;return e=new W({}),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function me(s){let e,t,l,n,a,i,f;const o=[s[9]];let g={};for(let c=0;c{u=null}),K());let v=n;n=B(c),n===v?m[n].p(c,d):(G(),H(m[v],1,1,()=>{m[v]=null}),K(),a=m[n],a?a.p(c,d):(a=m[n]=k[n](c),a.c()),h(a,1),a.m(i.parentNode,i))},i(c){f||(h(e.$$.fragment,c),h(u),h(a),f=!0)},o(c){H(e.$$.fragment,c),H(u),H(a),f=!1},d(c){c&&(M(t),M(l),M(i)),E(e,c),u&&u.d(c),m[n].d(c)}}}function be(s){let e,t;return e=new se({props:{test_id:"label",visible:s[2],elem_id:s[0],elem_classes:s[1],container:s[6],scale:s[7],min_width:s[8],padding:!1,$$slots:{default:[me]},$$scope:{ctx:s}}}),{c(){S(e.$$.fragment)},m(l,n){j(e,l,n),t=!0},p(l,[n]){const a={};n&4&&(a.visible=l[2]),n&1&&(a.elem_id=l[0]),n&2&&(a.elem_classes=l[1]),n&64&&(a.container=l[6]),n&128&&(a.scale=l[7]),n&256&&(a.min_width=l[8]),n&73336&&(a.$$scope={dirty:n,ctx:l}),e.$set(a)},i(l){t||(h(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){E(e,l)}}}function ge(s,e,t){let 
l,n,{elem_id:a=""}=e,{elem_classes:i=[]}=e,{visible:f=!0}=e,{color:o=void 0}=e,{value:g={}}=e,{label:u="Label"}=e,{container:k=!0}=e,{scale:m=null}=e,{min_width:B=void 0}=e,{loading_status:c}=e,{show_label:d=!0}=e,{selectable:w=!1}=e;const v=T();function X(r){ne.call(this,s,r)}return s.$$set=r=>{"elem_id"in r&&t(0,a=r.elem_id),"elem_classes"in r&&t(1,i=r.elem_classes),"visible"in r&&t(2,f=r.visible),"color"in r&&t(3,o=r.color),"value"in r&&t(4,g=r.value),"label"in r&&t(5,u=r.label),"container"in r&&t(6,k=r.container),"scale"in r&&t(7,m=r.scale),"min_width"in r&&t(8,B=r.min_width),"loading_status"in r&&t(9,c=r.loading_status),"show_label"in r&&t(10,d=r.show_label),"selectable"in r&&t(11,w=r.selectable)},s.$$.update=()=>{s.$$.dirty&16&&t(13,{confidences:l,label:n}=g,l,(t(12,n),t(4,g))),s.$$.dirty&12288&&v("change")},[a,i,f,o,g,u,k,m,B,c,d,w,n,l,X]}class ve extends R{constructor(e){super(),Y(this,e,ge,be,q,{elem_id:0,elem_classes:1,visible:2,color:3,value:4,label:5,container:6,scale:7,min_width:8,loading_status:9,show_label:10,selectable:11})}}const Le=ve,Me=["static"];export{Le as Component,Me as modes}; -//# sourceMappingURL=index-ed872773.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py deleted file mode 100644 index c3c7a797c3b6e7aa83291b7aa78ae2b1ff7228d8..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/file_download.py +++ /dev/null @@ -1,1631 +0,0 @@ -import copy -import fnmatch -import io -import json -import os -import re -import shutil -import stat -import tempfile -import uuid -import warnings -from contextlib import contextmanager -from dataclasses import dataclass -from functools import partial -from hashlib import sha256 -from pathlib import Path -from typing import Any, BinaryIO, Dict, Generator, Optional, Tuple, Union -from urllib.parse import quote, urlparse - -import requests -from filelock import FileLock -from requests.exceptions import ProxyError, Timeout - -from huggingface_hub import constants - -from . 
import __version__ # noqa: F401 # for backward compatibility -from .constants import ( - DEFAULT_REVISION, - HF_HUB_DISABLE_SYMLINKS_WARNING, - HF_HUB_ENABLE_HF_TRANSFER, - HUGGINGFACE_CO_URL_TEMPLATE, - HUGGINGFACE_HEADER_X_LINKED_ETAG, - HUGGINGFACE_HEADER_X_LINKED_SIZE, - HUGGINGFACE_HEADER_X_REPO_COMMIT, - HUGGINGFACE_HUB_CACHE, - REPO_ID_SEPARATOR, - REPO_TYPES, - REPO_TYPES_URL_PREFIXES, -) -from .utils import ( - EntryNotFoundError, - LocalEntryNotFoundError, - SoftTemporaryDirectory, - build_hf_headers, - get_fastai_version, # noqa: F401 # for backward compatibility - get_fastcore_version, # noqa: F401 # for backward compatibility - get_graphviz_version, # noqa: F401 # for backward compatibility - get_jinja_version, # noqa: F401 # for backward compatibility - get_pydot_version, # noqa: F401 # for backward compatibility - get_tf_version, # noqa: F401 # for backward compatibility - get_torch_version, # noqa: F401 # for backward compatibility - hf_raise_for_status, - http_backoff, - is_fastai_available, # noqa: F401 # for backward compatibility - is_fastcore_available, # noqa: F401 # for backward compatibility - is_graphviz_available, # noqa: F401 # for backward compatibility - is_jinja_available, # noqa: F401 # for backward compatibility - is_pydot_available, # noqa: F401 # for backward compatibility - is_tf_available, # noqa: F401 # for backward compatibility - is_torch_available, # noqa: F401 # for backward compatibility - logging, - tqdm, - validate_hf_hub_args, -) -from .utils._headers import _http_user_agent -from .utils._runtime import _PY_VERSION # noqa: F401 # for backward compatibility -from .utils._typing import HTTP_METHOD_T, Literal - - -logger = logging.get_logger(__name__) - -# Regex to get filename from a "Content-Disposition" header for CDN-served files -HEADER_FILENAME_PATTERN = re.compile(r'filename="(?P.*?)";') - - -_are_symlinks_supported_in_dir: Dict[str, bool] = {} - - -def are_symlinks_supported(cache_dir: Union[str, Path, None] = None) -> bool: - """Return whether the symlinks are supported on the machine. - - Since symlinks support can change depending on the mounted disk, we need to check - on the precise cache folder. By default, the default HF cache directory is checked. - - Args: - cache_dir (`str`, `Path`, *optional*): - Path to the folder where cached files are stored. - - Returns: [bool] Whether symlinks are supported in the directory. - """ - # Defaults to HF cache - if cache_dir is None: - cache_dir = HUGGINGFACE_HUB_CACHE - cache_dir = str(Path(cache_dir).expanduser().resolve()) # make it unique - - # Check symlink compatibility only once (per cache directory) at first time use - if cache_dir not in _are_symlinks_supported_in_dir: - _are_symlinks_supported_in_dir[cache_dir] = True - - os.makedirs(cache_dir, exist_ok=True) - with SoftTemporaryDirectory(dir=cache_dir) as tmpdir: - src_path = Path(tmpdir) / "dummy_file_src" - src_path.touch() - dst_path = Path(tmpdir) / "dummy_file_dst" - - # Relative source path as in `_create_symlink`` - relative_src = os.path.relpath(src_path, start=os.path.dirname(dst_path)) - try: - os.symlink(relative_src, dst_path) - except OSError: - # Likely running on Windows - _are_symlinks_supported_in_dir[cache_dir] = False - - if not HF_HUB_DISABLE_SYMLINKS_WARNING: - message = ( - "`huggingface_hub` cache-system uses symlinks by default to" - " efficiently store duplicated files but your machine does not" - f" support them in {cache_dir}. 
Caching files will still work" - " but in a degraded version that might require more space on" - " your disk. This warning can be disabled by setting the" - " `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For" - " more details, see" - " https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations." - ) - if os.name == "nt": - message += ( - "\nTo support symlinks on Windows, you either need to" - " activate Developer Mode or to run Python as an" - " administrator. In order to see activate developer mode," - " see this article:" - " https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development" - ) - warnings.warn(message) - - return _are_symlinks_supported_in_dir[cache_dir] - - -# Return value when trying to load a file from cache but the file does not exist in the distant repo. -_CACHED_NO_EXIST = object() -_CACHED_NO_EXIST_T = Any -REGEX_COMMIT_HASH = re.compile(r"^[0-9a-f]{40}$") - - -@dataclass(frozen=True) -class HfFileMetadata: - """Data structure containing information about a file versioned on the Hub. - - Returned by [`get_hf_file_metadata`] based on a URL. - - Args: - commit_hash (`str`, *optional*): - The commit_hash related to the file. - etag (`str`, *optional*): - Etag of the file on the server. - location (`str`): - Location where to download the file. Can be a Hub url or not (CDN). - size (`size`): - Size of the file. In case of an LFS file, contains the size of the actual - LFS file, not the pointer. - """ - - commit_hash: Optional[str] - etag: Optional[str] - location: str - size: Optional[int] - - -@validate_hf_hub_args -def hf_hub_url( - repo_id: str, - filename: str, - *, - subfolder: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, -) -> str: - """Construct the URL of a file from the given information. - - The resolved address can either be a huggingface.co-hosted url, or a link to - Cloudfront (a Content Delivery Network, or CDN) for large files which are - more than a few MBs. - - Args: - repo_id (`str`): - A namespace (user or an organization) name and a repo name separated - by a `/`. - filename (`str`): - The name of the file in the repo. - subfolder (`str`, *optional*): - An optional value corresponding to a folder inside the repo. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if downloading from a dataset or space, - `None` or `"model"` if downloading from a model. Default is `None`. - revision (`str`, *optional*): - An optional Git revision id which can be a branch name, a tag, or a - commit hash. - - Example: - - ```python - >>> from huggingface_hub import hf_hub_url - - >>> hf_hub_url( - ... repo_id="julien-c/EsperBERTo-small", filename="pytorch_model.bin" - ... ) - 'https://huggingface.co/julien-c/EsperBERTo-small/resolve/main/pytorch_model.bin' - ``` - - - - Notes: - - Cloudfront is replicated over the globe so downloads are way faster for - the end user (and it also lowers our bandwidth costs). - - Cloudfront aggressively caches files by default (default TTL is 24 - hours), however this is not an issue here because we implement a - git-based versioning system on huggingface.co, which means that we store - the files on S3/Cloudfront in a content-addressable way (i.e., the file - name is its hash). Using content-addressable filenames means cache can't - ever be stale. - - In terms of client-side caching from this library, we base our caching - on the objects' entity tag (`ETag`), which is an identifier of a - specific version of a resource [1]_. 
An object's ETag is: its git-sha1 - if stored in git, or its sha256 if stored in git-lfs. - - - - References: - - - [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag - """ - if subfolder == "": - subfolder = None - if subfolder is not None: - filename = f"{subfolder}/{filename}" - - if repo_type not in REPO_TYPES: - raise ValueError("Invalid repo type") - - if repo_type in REPO_TYPES_URL_PREFIXES: - repo_id = REPO_TYPES_URL_PREFIXES[repo_type] + repo_id - - if revision is None: - revision = DEFAULT_REVISION - return HUGGINGFACE_CO_URL_TEMPLATE.format( - repo_id=repo_id, - revision=quote(revision, safe=""), - filename=quote(filename), - ) - - -def url_to_filename(url: str, etag: Optional[str] = None) -> str: - """Generate a local filename from a url. - - Convert `url` into a hashed filename in a reproducible way. If `etag` is - specified, append its hash to the url's, delimited by a period. If the url - ends with .h5 (Keras HDF5 weights) adds '.h5' to the name so that TF 2.0 can - identify it as a HDF5 file (see - https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1380) - - Args: - url (`str`): - The address to the file. - etag (`str`, *optional*): - The ETag of the file. - - Returns: - The generated filename. - """ - url_bytes = url.encode("utf-8") - filename = sha256(url_bytes).hexdigest() - - if etag: - etag_bytes = etag.encode("utf-8") - filename += "." + sha256(etag_bytes).hexdigest() - - if url.endswith(".h5"): - filename += ".h5" - - return filename - - -def filename_to_url( - filename, - cache_dir: Optional[str] = None, - legacy_cache_layout: bool = False, -) -> Tuple[str, str]: - """ - Return the url and etag (which may be `None`) stored for `filename`. Raise - `EnvironmentError` if `filename` or its stored metadata do not exist. - - Args: - filename (`str`): - The name of the file - cache_dir (`str`, *optional*): - The cache directory to use instead of the default one. - legacy_cache_layout (`bool`, *optional*, defaults to `False`): - If `True`, uses the legacy file cache layout i.e. just call `hf_hub_url` - then `cached_download`. This is deprecated as the new cache layout is - more powerful. 
- """ - if not legacy_cache_layout: - warnings.warn( - "`filename_to_url` uses the legacy way cache file layout", - FutureWarning, - ) - - if cache_dir is None: - cache_dir = HUGGINGFACE_HUB_CACHE - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - cache_path = os.path.join(cache_dir, filename) - if not os.path.exists(cache_path): - raise EnvironmentError(f"file {cache_path} not found") - - meta_path = cache_path + ".json" - if not os.path.exists(meta_path): - raise EnvironmentError(f"file {meta_path} not found") - - with open(meta_path, encoding="utf-8") as meta_file: - metadata = json.load(meta_file) - url = metadata["url"] - etag = metadata["etag"] - - return url, etag - - -def http_user_agent( - *, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - user_agent: Union[Dict, str, None] = None, -) -> str: - """Deprecated in favor of [`build_hf_headers`].""" - return _http_user_agent( - library_name=library_name, - library_version=library_version, - user_agent=user_agent, - ) - - -class OfflineModeIsEnabled(ConnectionError): - pass - - -def _raise_if_offline_mode_is_enabled(msg: Optional[str] = None): - """Raise a OfflineModeIsEnabled error (subclass of ConnectionError) if - HF_HUB_OFFLINE is True.""" - if constants.HF_HUB_OFFLINE: - raise OfflineModeIsEnabled( - "Offline mode is enabled." if msg is None else "Offline mode is enabled. " + str(msg) - ) - - -def _request_wrapper( - method: HTTP_METHOD_T, - url: str, - *, - max_retries: int = 0, - base_wait_time: float = 0.5, - max_wait_time: float = 2, - timeout: Optional[float] = 10.0, - follow_relative_redirects: bool = False, - **params, -) -> requests.Response: - """Wrapper around requests methods to add several features. - - What it does: - 1. Ensure offline mode is disabled (env variable `HF_HUB_OFFLINE` not set to 1). - If enabled, a `OfflineModeIsEnabled` exception is raised. - 2. Follow relative redirections if `follow_relative_redirects=True` even when - `allow_redirection` kwarg is set to False. - 3. Retry in case request fails with a `Timeout` or `ProxyError`, with exponential backoff. - - Args: - method (`str`): - HTTP method, such as 'GET' or 'HEAD'. - url (`str`): - The URL of the resource to fetch. - max_retries (`int`, *optional*, defaults to `0`): - Maximum number of retries, defaults to 0 (no retries). - base_wait_time (`float`, *optional*, defaults to `0.5`): - Duration (in seconds) to wait before retrying the first time. - Wait time between retries then grows exponentially, capped by - `max_wait_time`. - max_wait_time (`float`, *optional*, defaults to `2`): - Maximum amount of time between two retries, in seconds. - timeout (`float`, *optional*, defaults to `10`): - How many seconds to wait for the server to send data before - giving up which is passed to `requests.request`. - follow_relative_redirects (`bool`, *optional*, defaults to `False`) - If True, relative redirection (redirection to the same site) will be - resolved even when `allow_redirection` kwarg is set to False. Useful when we - want to follow a redirection to a renamed repository without following - redirection to a CDN. - **params (`dict`, *optional*): - Params to pass to `requests.request`. - """ - # 1. Check online mode - _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") - - # 2. 
Force relative redirection - if follow_relative_redirects: - response = _request_wrapper( - method=method, - url=url, - max_retries=max_retries, - base_wait_time=base_wait_time, - max_wait_time=max_wait_time, - timeout=timeout, - follow_relative_redirects=False, - **params, - ) - - # If redirection, we redirect only relative paths. - # This is useful in case of a renamed repository. - if 300 <= response.status_code <= 399: - parsed_target = urlparse(response.headers["Location"]) - if parsed_target.netloc == "": - # This means it is a relative 'location' headers, as allowed by RFC 7231. - # (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource') - # We want to follow this relative redirect ! - # - # Highly inspired by `resolve_redirects` from requests library. - # See https://github.com/psf/requests/blob/main/requests/sessions.py#L159 - return _request_wrapper( - method=method, - url=urlparse(url)._replace(path=parsed_target.path).geturl(), - max_retries=max_retries, - base_wait_time=base_wait_time, - max_wait_time=max_wait_time, - timeout=timeout, - follow_relative_redirects=True, # resolve recursively - **params, - ) - return response - - # 3. Exponential backoff - return http_backoff( - method=method, - url=url, - max_retries=max_retries, - base_wait_time=base_wait_time, - max_wait_time=max_wait_time, - retry_on_exceptions=(Timeout, ProxyError), - retry_on_status_codes=(), - timeout=timeout, - **params, - ) - - -def _request_with_retry(*args, **kwargs) -> requests.Response: - """Deprecated method. Please use `_request_wrapper` instead. - - Alias to keep backward compatibility (used in Transformers). - """ - return _request_wrapper(*args, **kwargs) - - -def http_get( - url: str, - temp_file: BinaryIO, - *, - proxies=None, - resume_size: float = 0, - headers: Optional[Dict[str, str]] = None, - timeout: Optional[float] = 10.0, - max_retries: int = 0, - expected_size: Optional[int] = None, -): - """ - Download a remote file. Do not gobble up errors, and will return errors tailored to the Hugging Face Hub. - """ - if not resume_size: - if HF_HUB_ENABLE_HF_TRANSFER: - try: - # Download file using an external Rust-based package. Download is faster - # (~2x speed-up) but support less features (no progress bars). - from hf_transfer import download - - logger.debug(f"Download {url} using HF_TRANSFER.") - max_files = 100 - chunk_size = 10 * 1024 * 1024 # 10 MB - download(url, temp_file.name, max_files, chunk_size, headers=headers) - return - except ImportError: - raise ValueError( - "Fast download using 'hf_transfer' is enabled" - " (HF_HUB_ENABLE_HF_TRANSFER=1) but 'hf_transfer' package is not" - " available in your environment. Try `pip install hf_transfer`." - ) - except Exception as e: - raise RuntimeError( - "An error occurred while downloading using `hf_transfer`. Consider" - " disabling HF_HUB_ENABLE_HF_TRANSFER for better error handling." - ) from e - - headers = copy.deepcopy(headers) or {} - if resume_size > 0: - headers["Range"] = "bytes=%d-" % (resume_size,) - - r = _request_wrapper( - method="GET", - url=url, - stream=True, - proxies=proxies, - headers=headers, - timeout=timeout, - max_retries=max_retries, - ) - hf_raise_for_status(r) - content_length = r.headers.get("Content-Length") - - # NOTE: 'total' is the total number of bytes to download, not the number of bytes in the file. - # If the file is compressed, the number of bytes in the saved file will be higher than 'total'. 
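As a side note on the retry behaviour documented for `_request_wrapper` above: the wait time starts at `base_wait_time`, grows exponentially and is capped by `max_wait_time`. A minimal sketch of that schedule, assuming the delay doubles on each retry:

```python
def backoff_waits(max_retries: int = 3, base_wait_time: float = 0.5, max_wait_time: float = 2.0):
    """Yield the successive wait times described above (illustrative only, not the library's code)."""
    wait = base_wait_time
    for _ in range(max_retries):
        yield min(wait, max_wait_time)
        wait *= 2

print(list(backoff_waits()))  # [0.5, 1.0, 2.0]
```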
- total = resume_size + int(content_length) if content_length is not None else None - - displayed_name = url - content_disposition = r.headers.get("Content-Disposition") - if content_disposition is not None: - match = HEADER_FILENAME_PATTERN.search(content_disposition) - if match is not None: - # Means file is on CDN - displayed_name = match.groupdict()["filename"] - - # Truncate filename if too long to display - if len(displayed_name) > 22: - displayed_name = f"(…){displayed_name[-20:]}" - - progress = tqdm( - unit="B", - unit_scale=True, - total=total, - initial=resume_size, - desc=f"Downloading {displayed_name}", - disable=bool(logger.getEffectiveLevel() == logging.NOTSET), - ) - for chunk in r.iter_content(chunk_size=10 * 1024 * 1024): - if chunk: # filter out keep-alive new chunks - progress.update(len(chunk)) - temp_file.write(chunk) - - if expected_size is not None and expected_size != temp_file.tell(): - raise EnvironmentError( - f"Consistency check failed: file should be of size {expected_size} but has size" - f" {temp_file.tell()} ({displayed_name}).\nWe are sorry for the inconvenience. Please retry download and" - " pass `force_download=True, resume_download=False` as argument.\nIf the issue persists, please let us" - " know by opening an issue on https://github.com/huggingface/huggingface_hub." - ) - - progress.close() - - -@validate_hf_hub_args -def cached_download( - url: str, - *, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - cache_dir: Union[str, Path, None] = None, - user_agent: Union[Dict, str, None] = None, - force_download: bool = False, - force_filename: Optional[str] = None, - proxies: Optional[Dict] = None, - etag_timeout: float = 10, - resume_download: bool = False, - token: Union[bool, str, None] = None, - local_files_only: bool = False, - legacy_cache_layout: bool = False, -) -> str: - """ - Download from a given URL and cache it if it's not already present in the - local cache. - - Given a URL, this function looks for the corresponding file in the local - cache. If it's not there, download it. Then return the path to the cached - file. - - Will raise errors tailored to the Hugging Face Hub. - - Args: - url (`str`): - The path to the file to be downloaded. - library_name (`str`, *optional*): - The name of the library to which the object corresponds. - library_version (`str`, *optional*): - The version of the library. - cache_dir (`str`, `Path`, *optional*): - Path to the folder where cached files are stored. - user_agent (`dict`, `str`, *optional*): - The user-agent info in the form of a dictionary or a string. - force_download (`bool`, *optional*, defaults to `False`): - Whether the file should be downloaded even if it already exists in - the local cache. - force_filename (`str`, *optional*): - Use this name instead of a generated file name. - proxies (`dict`, *optional*): - Dictionary mapping protocol to the URL of the proxy passed to - `requests.request`. - etag_timeout (`float`, *optional* defaults to `10`): - When fetching ETag, how many seconds to wait for the server to send - data before giving up which is passed to `requests.request`. - resume_download (`bool`, *optional*, defaults to `False`): - If `True`, resume a previously interrupted download. - token (`bool`, `str`, *optional*): - A token to be used for the download. - - If `True`, the token is read from the HuggingFace config - folder. - - If a string, it's used as the authentication token. 
- local_files_only (`bool`, *optional*, defaults to `False`): - If `True`, avoid downloading the file and return the path to the - local cached file if it exists. - legacy_cache_layout (`bool`, *optional*, defaults to `False`): - Set this parameter to `True` to mention that you'd like to continue - the old cache layout. Putting this to `True` manually will not raise - any warning when using `cached_download`. We recommend using - `hf_hub_download` to take advantage of the new cache. - - Returns: - Local path (string) of file or if networking is off, last version of - file cached on disk. - - - - Raises the following errors: - - - [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) - if `token=True` and the token cannot be found. - - [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) - if ETag cannot be determined. - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - [`~utils.RevisionNotFoundError`] - If the revision to download from cannot be found. - - [`~utils.EntryNotFoundError`] - If the file to download cannot be found. - - [`~utils.LocalEntryNotFoundError`] - If network is disabled or unavailable and file is not found in cache. - - - """ - if not legacy_cache_layout: - warnings.warn( - ( - "'cached_download' is the legacy way to download files from the HF hub, please consider upgrading to" - " 'hf_hub_download'" - ), - FutureWarning, - ) - - if cache_dir is None: - cache_dir = HUGGINGFACE_HUB_CACHE - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - os.makedirs(cache_dir, exist_ok=True) - - headers = build_hf_headers( - token=token, - library_name=library_name, - library_version=library_version, - user_agent=user_agent, - ) - - url_to_download = url - etag = None - expected_size = None - if not local_files_only: - try: - # Temporary header: we want the full (decompressed) content size returned to be able to check the - # downloaded file size - headers["Accept-Encoding"] = "identity" - r = _request_wrapper( - method="HEAD", - url=url, - headers=headers, - allow_redirects=False, - follow_relative_redirects=True, - proxies=proxies, - timeout=etag_timeout, - ) - headers.pop("Accept-Encoding", None) - hf_raise_for_status(r) - etag = r.headers.get(HUGGINGFACE_HEADER_X_LINKED_ETAG) or r.headers.get("ETag") - # We favor a custom header indicating the etag of the linked resource, and - # we fallback to the regular etag header. - # If we don't have any of those, raise an error. - if etag is None: - raise OSError( - "Distant resource does not have an ETag, we won't be able to reliably ensure reproducibility." - ) - # We get the expected size of the file, to check the download went well. - expected_size = _int_or_none(r.headers.get("Content-Length")) - # In case of a redirect, save an extra redirect on the request.get call, - # and ensure we download the exact atomic version even if it changed - # between the HEAD and the GET (unlikely, but hey). - # Useful for lfs blobs that are stored on a CDN. 
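The metadata probe above boils down to a plain HEAD request with compression disabled. A rough standalone equivalent with `requests`, assuming the custom header constant resolves to `X-Linked-Etag` (this snippet is illustrative, not the library's code):

```python
import requests

url = "https://huggingface.co/julien-c/EsperBERTo-small/resolve/main/pytorch_model.bin"
r = requests.head(url, headers={"Accept-Encoding": "identity"}, allow_redirects=False, timeout=10)

etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
expected_size = int(r.headers["Content-Length"]) if "Content-Length" in r.headers else None
redirect_target = r.headers.get("Location")  # set when the blob actually lives on a CDN
```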
- if 300 <= r.status_code <= 399: - url_to_download = r.headers["Location"] - headers.pop("authorization", None) - expected_size = None # redirected -> can't know the expected size - except (requests.exceptions.SSLError, requests.exceptions.ProxyError): - # Actually raise for those subclasses of ConnectionError - raise - except ( - requests.exceptions.ConnectionError, - requests.exceptions.Timeout, - OfflineModeIsEnabled, - ): - # Otherwise, our Internet connection is down. - # etag is None - pass - - filename = force_filename if force_filename is not None else url_to_filename(url, etag) - - # get cache path to put the file - cache_path = os.path.join(cache_dir, filename) - - # etag is None == we don't have a connection or we passed local_files_only. - # try to get the last downloaded one - if etag is None: - if os.path.exists(cache_path) and not force_download: - return cache_path - else: - matching_files = [ - file - for file in fnmatch.filter(os.listdir(cache_dir), filename.split(".")[0] + ".*") - if not file.endswith(".json") and not file.endswith(".lock") - ] - if len(matching_files) > 0 and not force_download and force_filename is None: - return os.path.join(cache_dir, matching_files[-1]) - else: - # If files cannot be found and local_files_only=True, - # the models might've been found if local_files_only=False - # Notify the user about that - if local_files_only: - raise LocalEntryNotFoundError( - "Cannot find the requested files in the cached path and" - " outgoing traffic has been disabled. To enable model look-ups" - " and downloads online, set 'local_files_only' to False." - ) - else: - raise LocalEntryNotFoundError( - "Connection error, and we cannot find the requested files in" - " the cached path. Please try again or make sure your Internet" - " connection is on." - ) - - # From now on, etag is not None. - if os.path.exists(cache_path) and not force_download: - return cache_path - - # Prevent parallel downloads of the same file with a lock. - lock_path = cache_path + ".lock" - - # Some Windows versions do not allow for paths longer than 255 characters. - # In this case, we must specify it is an extended path by using the "\\?\" prefix. - if os.name == "nt" and len(os.path.abspath(lock_path)) > 255: - lock_path = "\\\\?\\" + os.path.abspath(lock_path) - - if os.name == "nt" and len(os.path.abspath(cache_path)) > 255: - cache_path = "\\\\?\\" + os.path.abspath(cache_path) - - with FileLock(lock_path): - # If the download just completed while the lock was activated. - if os.path.exists(cache_path) and not force_download: - # Even if returning early like here, the lock will be released. - return cache_path - - if resume_download: - incomplete_path = cache_path + ".incomplete" - - @contextmanager - def _resumable_file_manager() -> Generator[io.BufferedWriter, None, None]: - with open(incomplete_path, "ab") as f: - yield f - - temp_file_manager = _resumable_file_manager - if os.path.exists(incomplete_path): - resume_size = os.stat(incomplete_path).st_size - else: - resume_size = 0 - else: - temp_file_manager = partial( # type: ignore - tempfile.NamedTemporaryFile, mode="wb", dir=cache_dir, delete=False - ) - resume_size = 0 - - # Download to temporary file, then copy to cache dir once finished. - # Otherwise you get corrupt cache entries if the download gets interrupted. 
- with temp_file_manager() as temp_file: - logger.info("downloading %s to %s", url, temp_file.name) - - http_get( - url_to_download, - temp_file, - proxies=proxies, - resume_size=resume_size, - headers=headers, - expected_size=expected_size, - ) - - logger.info("storing %s in cache at %s", url, cache_path) - _chmod_and_replace(temp_file.name, cache_path) - - if force_filename is None: - logger.info("creating metadata file for %s", cache_path) - meta = {"url": url, "etag": etag} - meta_path = cache_path + ".json" - with open(meta_path, "w") as meta_file: - json.dump(meta, meta_file) - - return cache_path - - -def _normalize_etag(etag: Optional[str]) -> Optional[str]: - """Normalize ETag HTTP header, so it can be used to create nice filepaths. - - The HTTP spec allows two forms of ETag: - ETag: W/"" - ETag: "" - - For now, we only expect the second form from the server, but we want to be future-proof so we support both. For - more context, see `TestNormalizeEtag` tests and https://github.com/huggingface/huggingface_hub/pull/1428. - - Args: - etag (`str`, *optional*): HTTP header - - Returns: - `str` or `None`: string that can be used as a nice directory name. - Returns `None` if input is None. - """ - if etag is None: - return None - return etag.lstrip("W/").strip('"') - - -def _create_relative_symlink(src: str, dst: str, new_blob: bool = False) -> None: - """Alias method used in `transformers` conversion script.""" - return _create_symlink(src=src, dst=dst, new_blob=new_blob) - - -def _create_symlink(src: str, dst: str, new_blob: bool = False) -> None: - """Create a symbolic link named dst pointing to src. - - By default, it will try to create a symlink using a relative path. Relative paths have 2 advantages: - - If the cache_folder is moved (example: back-up on a shared drive), relative paths within the cache folder will - not brake. - - Relative paths seems to be better handled on Windows. Issue was reported 3 times in less than a week when - changing from relative to absolute paths. See https://github.com/huggingface/huggingface_hub/issues/1398, - https://github.com/huggingface/diffusers/issues/2729 and https://github.com/huggingface/transformers/pull/22228. - NOTE: The issue with absolute paths doesn't happen on admin mode. - When creating a symlink from the cache to a local folder, it is possible that a relative path cannot be created. - This happens when paths are not on the same volume. In that case, we use absolute paths. - - - The result layout looks something like - └── [ 128] snapshots - ├── [ 128] 2439f60ef33a0d46d85da5001d52aeda5b00ce9f - │ ├── [ 52] README.md -> ../../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812 - │ └── [ 76] pytorch_model.bin -> ../../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd - - If symlinks cannot be created on this platform (most likely to be Windows), the workaround is to avoid symlinks by - having the actual file in `dst`. If it is a new file (`new_blob=True`), we move it to `dst`. If it is not a new file - (`new_blob=False`), we don't know if the blob file is already referenced elsewhere. To avoid breaking existing - cache, the file is duplicated on the disk. - - In case symlinks are not supported, a warning message is displayed to the user once when loading `huggingface_hub`. - The warning message can be disable with the `DISABLE_SYMLINKS_WARNING` environment variable. 
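To make the relative-link layout sketched above concrete, here is how such a pointer target can be derived with the standard library; the paths are invented for illustration:

```python
import os

blob = "/data/hub/models--julien-c--EsperBERTo-small/blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812"
pointer = "/data/hub/models--julien-c--EsperBERTo-small/snapshots/2439f60ef33a0d46d85da5001d52aeda5b00ce9f/README.md"

relative_src = os.path.relpath(blob, os.path.dirname(pointer))
# relative_src == '../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812'
# os.symlink(relative_src, pointer) would then create the pointer file inside the snapshot folder
```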
- """ - try: - os.remove(dst) - except OSError: - pass - - abs_src = os.path.abspath(os.path.expanduser(src)) - abs_dst = os.path.abspath(os.path.expanduser(dst)) - - # Use relative_dst in priority - try: - relative_src = os.path.relpath(abs_src, os.path.dirname(abs_dst)) - except ValueError: - # Raised on Windows if src and dst are not on the same volume. This is the case when creating a symlink to a - # local_dir instead of within the cache directory. - # See https://docs.python.org/3/library/os.path.html#os.path.relpath - relative_src = None - - try: - try: - commonpath = os.path.commonpath([abs_src, abs_dst]) - _support_symlinks = are_symlinks_supported(os.path.dirname(commonpath)) - except ValueError: - # Raised if src and dst are not on the same volume. Symlinks will still work on Linux/Macos. - # See https://docs.python.org/3/library/os.path.html#os.path.commonpath - _support_symlinks = os.name != "nt" - except PermissionError: - # Permission error means src and dst are not in the same volume (e.g. destination path has been provided - # by the user via `local_dir`. Let's test symlink support there) - _support_symlinks = are_symlinks_supported(os.path.dirname(abs_dst)) - - if _support_symlinks: - src_rel_or_abs = relative_src or abs_src - logger.info(f"Creating pointer from {src_rel_or_abs} to {abs_dst}") - try: - os.symlink(src_rel_or_abs, abs_dst) - except FileExistsError: - if os.path.islink(abs_dst) and os.path.realpath(abs_dst) == os.path.realpath(abs_src): - # `abs_dst` already exists and is a symlink to the `abs_src` blob. It is most likely that the file has - # been cached twice concurrently (exactly between `os.remove` and `os.symlink`). Do nothing. - pass - else: - # Very unlikely to happen. Means a file `dst` has been created exactly between `os.remove` and - # `os.symlink` and is not a symlink to the `abs_src` blob file. Raise exception. - raise - elif new_blob: - logger.info(f"Symlink not supported. Moving file from {abs_src} to {abs_dst}") - shutil.move(src, dst) - else: - logger.info(f"Symlink not supported. Copying file from {abs_src} to {abs_dst}") - shutil.copyfile(src, dst) - - -def _cache_commit_hash_for_specific_revision(storage_folder: str, revision: str, commit_hash: str) -> None: - """Cache reference between a revision (tag, branch or truncated commit hash) and the corresponding commit hash. - - Does nothing if `revision` is already a proper `commit_hash` or reference is already cached. - """ - if revision != commit_hash: - ref_path = Path(storage_folder) / "refs" / revision - ref_path.parent.mkdir(parents=True, exist_ok=True) - if not ref_path.exists() or commit_hash != ref_path.read_text(): - # Update ref only if has been updated. Could cause useless error in case - # repo is already cached and user doesn't have write access to cache folder. - # See https://github.com/huggingface/huggingface_hub/issues/1216. - ref_path.write_text(commit_hash) - - -@validate_hf_hub_args -def repo_folder_name(*, repo_id: str, repo_type: str) -> str: - """Return a serialized version of a hf.co repo name and type, safe for disk storage - as a single non-nested folder. 
- - Example: models--julien-c--EsperBERTo-small - """ - # remove all `/` occurrences to correctly convert repo to directory name - parts = [f"{repo_type}s", *repo_id.split("/")] - return REPO_ID_SEPARATOR.join(parts) - - -@validate_hf_hub_args -def hf_hub_download( - repo_id: str, - filename: str, - *, - subfolder: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - library_name: Optional[str] = None, - library_version: Optional[str] = None, - cache_dir: Union[str, Path, None] = None, - local_dir: Union[str, Path, None] = None, - local_dir_use_symlinks: Union[bool, Literal["auto"]] = "auto", - user_agent: Union[Dict, str, None] = None, - force_download: bool = False, - force_filename: Optional[str] = None, - proxies: Optional[Dict] = None, - etag_timeout: float = 10, - resume_download: bool = False, - token: Union[bool, str, None] = None, - local_files_only: bool = False, - legacy_cache_layout: bool = False, -) -> str: - """Download a given file if it's not already present in the local cache. - - The new cache file layout looks like this: - - The cache directory contains one subfolder per repo_id (namespaced by repo type) - - inside each repo folder: - - refs is a list of the latest known revision => commit_hash pairs - - blobs contains the actual file blobs (identified by their git-sha or sha256, depending on - whether they're LFS files or not) - - snapshots contains one subfolder per commit, each "commit" contains the subset of the files - that have been resolved at that particular commit. Each filename is a symlink to the blob - at that particular commit. - - If `local_dir` is provided, the file structure from the repo will be replicated in this location. You can configure - how you want to move those files: - - If `local_dir_use_symlinks="auto"` (default), files are downloaded and stored in the cache directory as blob - files. Small files (<5MB) are duplicated in `local_dir` while a symlink is created for bigger files. The goal - is to be able to manually edit and save small files without corrupting the cache while saving disk space for - binary files. The 5MB threshold can be configured with the `HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD` - environment variable. - - If `local_dir_use_symlinks=True`, files are downloaded, stored in the cache directory and symlinked in `local_dir`. - This is optimal in term of disk usage but files must not be manually edited. - - If `local_dir_use_symlinks=False` and the blob files exist in the cache directory, they are duplicated in the - local dir. This means disk usage is not optimized. - - Finally, if `local_dir_use_symlinks=False` and the blob files do not exist in the cache directory, then the - files are downloaded and directly placed under `local_dir`. This means if you need to download them again later, - they will be re-downloaded entirely. - - ``` - [ 96] . 
- └── [ 160] models--julien-c--EsperBERTo-small - ├── [ 160] blobs - │ ├── [321M] 403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd - │ ├── [ 398] 7cb18dc9bafbfcf74629a4b760af1b160957a83e - │ └── [1.4K] d7edf6bd2a681fb0175f7735299831ee1b22b812 - ├── [ 96] refs - │ └── [ 40] main - └── [ 128] snapshots - ├── [ 128] 2439f60ef33a0d46d85da5001d52aeda5b00ce9f - │ ├── [ 52] README.md -> ../../blobs/d7edf6bd2a681fb0175f7735299831ee1b22b812 - │ └── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd - └── [ 128] bbc77c8132af1cc5cf678da3f1ddf2de43606d48 - ├── [ 52] README.md -> ../../blobs/7cb18dc9bafbfcf74629a4b760af1b160957a83e - └── [ 76] pytorch_model.bin -> ../../blobs/403450e234d65943a7dcf7e05a771ce3c92faa84dd07db4ac20f592037a1e4bd - ``` - - Args: - repo_id (`str`): - A user or an organization name and a repo name separated by a `/`. - filename (`str`): - The name of the file in the repo. - subfolder (`str`, *optional*): - An optional value corresponding to a folder inside the model repo. - repo_type (`str`, *optional*): - Set to `"dataset"` or `"space"` if downloading from a dataset or space, - `None` or `"model"` if downloading from a model. Default is `None`. - revision (`str`, *optional*): - An optional Git revision id which can be a branch name, a tag, or a - commit hash. - library_name (`str`, *optional*): - The name of the library to which the object corresponds. - library_version (`str`, *optional*): - The version of the library. - cache_dir (`str`, `Path`, *optional*): - Path to the folder where cached files are stored. - local_dir (`str` or `Path`, *optional*): - If provided, the downloaded file will be placed under this directory, either as a symlink (default) or - a regular file (see description for more details). - local_dir_use_symlinks (`"auto"` or `bool`, defaults to `"auto"`): - To be used with `local_dir`. If set to "auto", the cache directory will be used and the file will be either - duplicated or symlinked to the local directory depending on its size. It set to `True`, a symlink will be - created, no matter the file size. If set to `False`, the file will either be duplicated from cache (if - already exists) or downloaded from the Hub and not cached. See description for more details. - user_agent (`dict`, `str`, *optional*): - The user-agent info in the form of a dictionary or a string. - force_download (`bool`, *optional*, defaults to `False`): - Whether the file should be downloaded even if it already exists in - the local cache. - proxies (`dict`, *optional*): - Dictionary mapping protocol to the URL of the proxy passed to - `requests.request`. - etag_timeout (`float`, *optional*, defaults to `10`): - When fetching ETag, how many seconds to wait for the server to send - data before giving up which is passed to `requests.request`. - resume_download (`bool`, *optional*, defaults to `False`): - If `True`, resume a previously interrupted download. - token (`str`, `bool`, *optional*): - A token to be used for the download. - - If `True`, the token is read from the HuggingFace config - folder. - - If a string, it's used as the authentication token. - local_files_only (`bool`, *optional*, defaults to `False`): - If `True`, avoid downloading the file and return the path to the - local cached file if it exists. - legacy_cache_layout (`bool`, *optional*, defaults to `False`): - If `True`, uses the legacy file cache layout i.e. just call [`hf_hub_url`] - then `cached_download`. 
This is deprecated as the new cache layout is - more powerful. - - Returns: - Local path (string) of file or if networking is off, last version of - file cached on disk. - - - - Raises the following errors: - - - [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) - if `token=True` and the token cannot be found. - - [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError) - if ETag cannot be determined. - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if some parameter value is invalid - - [`~utils.RepositoryNotFoundError`] - If the repository to download from cannot be found. This may be because it doesn't exist, - or because it is set to `private` and you do not have access. - - [`~utils.RevisionNotFoundError`] - If the revision to download from cannot be found. - - [`~utils.EntryNotFoundError`] - If the file to download cannot be found. - - [`~utils.LocalEntryNotFoundError`] - If network is disabled or unavailable and file is not found in cache. - - - """ - if force_filename is not None: - warnings.warn( - ( - "The `force_filename` parameter is deprecated as a new caching system, " - "which keeps the filenames as they are on the Hub, is now in place." - ), - FutureWarning, - ) - legacy_cache_layout = True - - if legacy_cache_layout: - url = hf_hub_url( - repo_id, - filename, - subfolder=subfolder, - repo_type=repo_type, - revision=revision, - ) - - return cached_download( - url, - library_name=library_name, - library_version=library_version, - cache_dir=cache_dir, - user_agent=user_agent, - force_download=force_download, - force_filename=force_filename, - proxies=proxies, - etag_timeout=etag_timeout, - resume_download=resume_download, - token=token, - local_files_only=local_files_only, - legacy_cache_layout=legacy_cache_layout, - ) - - if cache_dir is None: - cache_dir = HUGGINGFACE_HUB_CACHE - if revision is None: - revision = DEFAULT_REVISION - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - if isinstance(local_dir, Path): - local_dir = str(local_dir) - - if subfolder == "": - subfolder = None - if subfolder is not None: - # This is used to create a URL, and not a local path, hence the forward slash. - filename = f"{subfolder}/{filename}" - - if repo_type is None: - repo_type = "model" - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type: {repo_type}. Accepted repo types are: {str(REPO_TYPES)}") - - storage_folder = os.path.join(cache_dir, repo_folder_name(repo_id=repo_id, repo_type=repo_type)) - os.makedirs(storage_folder, exist_ok=True) - - # cross platform transcription of filename, to be used as a local file path. - relative_filename = os.path.join(*filename.split("/")) - if os.name == "nt": - if relative_filename.startswith("..\\") or "\\..\\" in relative_filename: - raise ValueError( - f"Invalid filename: cannot handle filename '{relative_filename}' on Windows. Please ask the repository" - " owner to rename this file." - ) - - # if user provides a commit_hash and they already have the file on disk, - # shortcut everything. 
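Before following the resolution logic below, it may help to see the function from the caller's side. A typical invocation, with the repo and filename borrowed from the layout example above, looks like this (a usage sketch, not part of the original module):

```python
from huggingface_hub import hf_hub_download

# Standard cached download: returns a path inside the snapshots/ tree shown earlier.
path = hf_hub_download(repo_id="julien-c/EsperBERTo-small", filename="pytorch_model.bin")

# Download into a working directory instead; with "auto", small files are copied
# into local_dir while large files are symlinked back to the cache.
path = hf_hub_download(
    repo_id="julien-c/EsperBERTo-small",
    filename="pytorch_model.bin",
    local_dir="./EsperBERTo-small",
    local_dir_use_symlinks="auto",
)
```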
- if REGEX_COMMIT_HASH.match(revision): - pointer_path = _get_pointer_path(storage_folder, revision, relative_filename) - if os.path.exists(pointer_path): - if local_dir is not None: - return _to_local_dir(pointer_path, local_dir, relative_filename, use_symlinks=local_dir_use_symlinks) - return pointer_path - - url = hf_hub_url(repo_id, filename, repo_type=repo_type, revision=revision) - - headers = build_hf_headers( - token=token, - library_name=library_name, - library_version=library_version, - user_agent=user_agent, - ) - - url_to_download = url - etag = None - commit_hash = None - expected_size = None - if not local_files_only: - try: - try: - metadata = get_hf_file_metadata( - url=url, - token=token, - proxies=proxies, - timeout=etag_timeout, - ) - except EntryNotFoundError as http_error: - # Cache the non-existence of the file and raise - commit_hash = http_error.response.headers.get(HUGGINGFACE_HEADER_X_REPO_COMMIT) - if commit_hash is not None and not legacy_cache_layout: - no_exist_file_path = Path(storage_folder) / ".no_exist" / commit_hash / relative_filename - no_exist_file_path.parent.mkdir(parents=True, exist_ok=True) - no_exist_file_path.touch() - _cache_commit_hash_for_specific_revision(storage_folder, revision, commit_hash) - raise - - # Commit hash must exist - commit_hash = metadata.commit_hash - if commit_hash is None: - raise OSError("Distant resource does not seem to be on huggingface.co (missing commit header).") - - # Etag must exist - etag = metadata.etag - # We favor a custom header indicating the etag of the linked resource, and - # we fallback to the regular etag header. - # If we don't have any of those, raise an error. - if etag is None: - raise OSError( - "Distant resource does not have an ETag, we won't be able to reliably ensure reproducibility." - ) - - # Expected (uncompressed) size - expected_size = metadata.size - - # In case of a redirect, save an extra redirect on the request.get call, - # and ensure we download the exact atomic version even if it changed - # between the HEAD and the GET (unlikely, but hey). - # Useful for lfs blobs that are stored on a CDN. - if metadata.location != url: - url_to_download = metadata.location - # Remove authorization header when downloading a LFS blob - headers.pop("authorization", None) - except (requests.exceptions.SSLError, requests.exceptions.ProxyError): - # Actually raise for those subclasses of ConnectionError - raise - except ( - requests.exceptions.ConnectionError, - requests.exceptions.Timeout, - OfflineModeIsEnabled, - ): - # Otherwise, our Internet connection is down. - # etag is None - pass - - # etag is None == we don't have a connection or we passed local_files_only. - # try to get the last downloaded one from the specified revision. - # If the specified revision is a commit hash, look inside "snapshots". - # If the specified revision is a branch or tag, look inside "refs". - if etag is None: - # In those cases, we cannot force download. - if force_download: - raise ValueError( - "We have no connection or you passed local_files_only, so force_download is not an accepted option." 
- ) - - # Try to get "commit_hash" from "revision" - commit_hash = None - if REGEX_COMMIT_HASH.match(revision): - commit_hash = revision - else: - ref_path = os.path.join(storage_folder, "refs", revision) - if os.path.isfile(ref_path): - with open(ref_path) as f: - commit_hash = f.read() - - # Return pointer file if exists - if commit_hash is not None: - pointer_path = _get_pointer_path(storage_folder, commit_hash, relative_filename) - if os.path.exists(pointer_path): - if local_dir is not None: - return _to_local_dir( - pointer_path, local_dir, relative_filename, use_symlinks=local_dir_use_symlinks - ) - return pointer_path - - # If we couldn't find an appropriate file on disk, raise an error. - # If files cannot be found and local_files_only=True, - # the models might've been found if local_files_only=False - # Notify the user about that - if local_files_only: - raise LocalEntryNotFoundError( - "Cannot find the requested files in the disk cache and" - " outgoing traffic has been disabled. To enable hf.co look-ups" - " and downloads online, set 'local_files_only' to False." - ) - else: - raise LocalEntryNotFoundError( - "Connection error, and we cannot find the requested files in" - " the disk cache. Please try again or make sure your Internet" - " connection is on." - ) - - # From now on, etag and commit_hash are not None. - assert etag is not None, "etag must have been retrieved from server" - assert commit_hash is not None, "commit_hash must have been retrieved from server" - blob_path = os.path.join(storage_folder, "blobs", etag) - pointer_path = _get_pointer_path(storage_folder, commit_hash, relative_filename) - - os.makedirs(os.path.dirname(blob_path), exist_ok=True) - os.makedirs(os.path.dirname(pointer_path), exist_ok=True) - # if passed revision is not identical to commit_hash - # then revision has to be a branch name or tag name. - # In that case store a ref. - _cache_commit_hash_for_specific_revision(storage_folder, revision, commit_hash) - - if os.path.exists(pointer_path) and not force_download: - if local_dir is not None: - return _to_local_dir(pointer_path, local_dir, relative_filename, use_symlinks=local_dir_use_symlinks) - return pointer_path - - if os.path.exists(blob_path) and not force_download: - # we have the blob already, but not the pointer - if local_dir is not None: # to local dir - return _to_local_dir(blob_path, local_dir, relative_filename, use_symlinks=local_dir_use_symlinks) - else: # or in snapshot cache - _create_symlink(blob_path, pointer_path, new_blob=False) - return pointer_path - - # Prevent parallel downloads of the same file with a lock. - lock_path = blob_path + ".lock" - - # Some Windows versions do not allow for paths longer than 255 characters. - # In this case, we must specify it is an extended path by using the "\\?\" prefix. - if os.name == "nt" and len(os.path.abspath(lock_path)) > 255: - lock_path = "\\\\?\\" + os.path.abspath(lock_path) - - if os.name == "nt" and len(os.path.abspath(blob_path)) > 255: - blob_path = "\\\\?\\" + os.path.abspath(blob_path) - - with FileLock(lock_path): - # If the download just completed while the lock was activated. - if os.path.exists(pointer_path) and not force_download: - # Even if returning early like here, the lock will be released. 
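The comments around the lock describe a double-checked locking pattern: test for the file before taking the lock, then test again once inside it, so concurrent processes neither race nor download twice. In isolation, and with an invented `fetch` callback, the pattern looks roughly like this (`filelock` is already imported by this module):

```python
import os
from filelock import FileLock

def ensure_local_copy(path: str, fetch) -> str:
    """Double-checked locking around a download (illustrative helper, not library code)."""
    if os.path.exists(path):           # fast path: no lock needed
        return path
    with FileLock(path + ".lock"):
        if os.path.exists(path):       # another process finished while we waited for the lock
            return path
        fetch(path)                    # hypothetical callback that writes `path`
    return path
```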
- return pointer_path - - if resume_download: - incomplete_path = blob_path + ".incomplete" - - @contextmanager - def _resumable_file_manager() -> Generator[io.BufferedWriter, None, None]: - with open(incomplete_path, "ab") as f: - yield f - - temp_file_manager = _resumable_file_manager - if os.path.exists(incomplete_path): - resume_size = os.stat(incomplete_path).st_size - else: - resume_size = 0 - else: - temp_file_manager = partial( # type: ignore - tempfile.NamedTemporaryFile, mode="wb", dir=cache_dir, delete=False - ) - resume_size = 0 - - # Download to temporary file, then copy to cache dir once finished. - # Otherwise you get corrupt cache entries if the download gets interrupted. - with temp_file_manager() as temp_file: - logger.info("downloading %s to %s", url, temp_file.name) - - http_get( - url_to_download, - temp_file, - proxies=proxies, - resume_size=resume_size, - headers=headers, - expected_size=expected_size, - ) - - if local_dir is None: - logger.info(f"Storing {url} in cache at {blob_path}") - _chmod_and_replace(temp_file.name, blob_path) - _create_symlink(blob_path, pointer_path, new_blob=True) - else: - local_dir_filepath = os.path.join(local_dir, relative_filename) - os.makedirs(os.path.dirname(local_dir_filepath), exist_ok=True) - - # If "auto" (default) copy-paste small files to ease manual editing but symlink big files to save disk - # In both cases, blob file is cached. - is_big_file = os.stat(temp_file.name).st_size > constants.HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD - if local_dir_use_symlinks is True or (local_dir_use_symlinks == "auto" and is_big_file): - logger.info(f"Storing {url} in cache at {blob_path}") - _chmod_and_replace(temp_file.name, blob_path) - logger.info("Create symlink to local dir") - _create_symlink(blob_path, local_dir_filepath, new_blob=False) - elif local_dir_use_symlinks == "auto" and not is_big_file: - logger.info(f"Storing {url} in cache at {blob_path}") - _chmod_and_replace(temp_file.name, blob_path) - logger.info("Duplicate in local dir (small file and use_symlink set to 'auto')") - shutil.copyfile(blob_path, local_dir_filepath) - else: - logger.info(f"Storing {url} in local_dir at {local_dir_filepath} (not cached).") - _chmod_and_replace(temp_file.name, local_dir_filepath) - pointer_path = local_dir_filepath # for return value - - try: - os.remove(lock_path) - except OSError: - pass - - return pointer_path - - -@validate_hf_hub_args -def try_to_load_from_cache( - repo_id: str, - filename: str, - cache_dir: Union[str, Path, None] = None, - revision: Optional[str] = None, - repo_type: Optional[str] = None, -) -> Union[str, _CACHED_NO_EXIST_T, None]: - """ - Explores the cache to return the latest cached file for a given revision if found. - - This function will not raise any exception if the file in not cached. - - Args: - cache_dir (`str` or `os.PathLike`): - The folder where the cached files lie. - repo_id (`str`): - The ID of the repo on huggingface.co. - filename (`str`): - The filename to look for inside `repo_id`. - revision (`str`, *optional*): - The specific model version to use. Will default to `"main"` if it's not provided and no `commit_hash` is - provided either. - repo_type (`str`, *optional*): - The type of the repository. Will default to `"model"`. - - Returns: - `Optional[str]` or `_CACHED_NO_EXIST`: - Will return `None` if the file was not cached. 
Otherwise: - - The exact path to the cached file if it's found in the cache - - A special value `_CACHED_NO_EXIST` if the file does not exist at the given commit hash and this fact was - cached. - - Example: - - ```python - from huggingface_hub import try_to_load_from_cache, _CACHED_NO_EXIST - - filepath = try_to_load_from_cache() - if isinstance(filepath, str): - # file exists and is cached - ... - elif filepath is _CACHED_NO_EXIST: - # non-existence of file is cached - ... - else: - # file is not cached - ... - ``` - """ - if revision is None: - revision = "main" - if repo_type is None: - repo_type = "model" - if repo_type not in REPO_TYPES: - raise ValueError(f"Invalid repo type: {repo_type}. Accepted repo types are: {str(REPO_TYPES)}") - if cache_dir is None: - cache_dir = HUGGINGFACE_HUB_CACHE - - object_id = repo_id.replace("/", "--") - repo_cache = os.path.join(cache_dir, f"{repo_type}s--{object_id}") - if not os.path.isdir(repo_cache): - # No cache for this model - return None - - refs_dir = os.path.join(repo_cache, "refs") - snapshots_dir = os.path.join(repo_cache, "snapshots") - no_exist_dir = os.path.join(repo_cache, ".no_exist") - - # Resolve refs (for instance to convert main to the associated commit sha) - if os.path.isdir(refs_dir): - revision_file = os.path.join(refs_dir, revision) - if os.path.isfile(revision_file): - with open(revision_file) as f: - revision = f.read() - - # Check if file is cached as "no_exist" - if os.path.isfile(os.path.join(no_exist_dir, revision, filename)): - return _CACHED_NO_EXIST - - # Check if revision folder exists - if not os.path.exists(snapshots_dir): - return None - cached_shas = os.listdir(snapshots_dir) - if revision not in cached_shas: - # No cache for this revision and we won't try to return a random revision - return None - - # Check if file exists in cache - cached_file = os.path.join(snapshots_dir, revision, filename) - return cached_file if os.path.isfile(cached_file) else None - - -@validate_hf_hub_args -def get_hf_file_metadata( - url: str, - token: Union[bool, str, None] = None, - proxies: Optional[Dict] = None, - timeout: Optional[float] = 10.0, -) -> HfFileMetadata: - """Fetch metadata of a file versioned on the Hub for a given url. - - Args: - url (`str`): - File url, for example returned by [`hf_hub_url`]. - token (`str` or `bool`, *optional*): - A token to be used for the download. - - If `True`, the token is read from the HuggingFace config - folder. - - If `False` or `None`, no token is provided. - - If a string, it's used as the authentication token. - proxies (`dict`, *optional*): - Dictionary mapping protocol to the URL of the proxy passed to - `requests.request`. - timeout (`float`, *optional*, defaults to 10): - How many seconds to wait for the server to send metadata before giving up. - - Returns: - A [`HfFileMetadata`] object containing metadata such as location, etag, size and - commit_hash. - """ - headers = build_hf_headers(token=token) - headers["Accept-Encoding"] = "identity" # prevent any compression => we want to know the real size of the file - - # Retrieve metadata - r = _request_wrapper( - method="HEAD", - url=url, - headers=headers, - allow_redirects=False, - follow_relative_redirects=True, - proxies=proxies, - timeout=timeout, - ) - hf_raise_for_status(r) - - # Return - return HfFileMetadata( - commit_hash=r.headers.get(HUGGINGFACE_HEADER_X_REPO_COMMIT), - etag=_normalize_etag( - # We favor a custom header indicating the etag of the linked resource, and - # we fallback to the regular etag header. 
- r.headers.get(HUGGINGFACE_HEADER_X_LINKED_ETAG) - or r.headers.get("ETag") - ), - # Either from response headers (if redirected) or defaults to request url - # Do not use directly `url`, as `_request_wrapper` might have followed relative - # redirects. - location=r.headers.get("Location") or r.request.url, # type: ignore - size=_int_or_none(r.headers.get(HUGGINGFACE_HEADER_X_LINKED_SIZE) or r.headers.get("Content-Length")), - ) - - -def _int_or_none(value: Optional[str]) -> Optional[int]: - try: - return int(value) # type: ignore - except (TypeError, ValueError): - return None - - -def _chmod_and_replace(src: str, dst: str) -> None: - """Set correct permission before moving a blob from tmp directory to cache dir. - - Do not take into account the `umask` from the process as there is no convenient way - to get it that is thread-safe. - - See: - - About umask: https://docs.python.org/3/library/os.html#os.umask - - Thread-safety: https://stackoverflow.com/a/70343066 - - About solution: https://github.com/huggingface/huggingface_hub/pull/1220#issuecomment-1326211591 - - Fix issue: https://github.com/huggingface/huggingface_hub/issues/1141 - - Fix issue: https://github.com/huggingface/huggingface_hub/issues/1215 - """ - # Get umask by creating a temporary file in the cached repo folder. - tmp_file = Path(dst).parent.parent / f"tmp_{uuid.uuid4()}" - try: - tmp_file.touch() - cache_dir_mode = Path(tmp_file).stat().st_mode - os.chmod(src, stat.S_IMODE(cache_dir_mode)) - finally: - tmp_file.unlink() - - shutil.move(src, dst) - - -def _get_pointer_path(storage_folder: str, revision: str, relative_filename: str) -> str: - # Using `os.path.abspath` instead of `Path.resolve()` to avoid resolving symlinks - snapshot_path = os.path.join(storage_folder, "snapshots") - pointer_path = os.path.join(snapshot_path, revision, relative_filename) - if Path(os.path.abspath(snapshot_path)) not in Path(os.path.abspath(pointer_path)).parents: - raise ValueError( - "Invalid pointer path: cannot create pointer path in snapshot folder if" - f" `storage_folder='{storage_folder}'`, `revision='{revision}'` and" - f" `relative_filename='{relative_filename}'`." - ) - return pointer_path - - -def _to_local_dir( - path: str, local_dir: str, relative_filename: str, use_symlinks: Union[bool, Literal["auto"]] -) -> str: - """Place a file in a local dir (different than cache_dir). - - Either symlink to blob file in cache or duplicate file depending on `use_symlinks` and file size. - """ - # Using `os.path.abspath` instead of `Path.resolve()` to avoid resolving symlinks - local_dir_filepath = os.path.join(local_dir, relative_filename) - if Path(os.path.abspath(local_dir)) not in Path(os.path.abspath(local_dir_filepath)).parents: - raise ValueError( - f"Cannot copy file '{relative_filename}' to local dir '{local_dir}': file would not be in the local" - " directory." 
- ) - - os.makedirs(os.path.dirname(local_dir_filepath), exist_ok=True) - real_blob_path = os.path.realpath(path) - - # If "auto" (default) copy-paste small files to ease manual editing but symlink big files to save disk - if use_symlinks == "auto": - use_symlinks = os.stat(real_blob_path).st_size > constants.HF_HUB_LOCAL_DIR_AUTO_SYMLINK_THRESHOLD - - if use_symlinks: - _create_symlink(real_blob_path, local_dir_filepath, new_blob=False) - else: - shutil.copyfile(real_blob_path, local_dir_filepath) - return local_dir_filepath diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/README.md b/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/README.md deleted file mode 100644 index 6b25679efbe90d556244e7aa6bee3e863c28b069..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/research_projects/intel_opts/README.md +++ /dev/null @@ -1,37 +0,0 @@ -## Diffusers examples with Intel optimizations - -**This research project is not actively maintained by the diffusers team. For any questions or comments, please make sure to tag @hshen14 .** - -This aims to provide diffusers examples with Intel optimizations such as Bfloat16 for training/fine-tuning acceleration and 8-bit integer (INT8) for inference acceleration on Intel platforms. - -## Accelerating the fine-tuning for textual inversion - -We accelereate the fine-tuning for textual inversion with Intel Extension for PyTorch. The [examples](textual_inversion) enable both single node and multi-node distributed training with Bfloat16 support on Intel Xeon Scalable Processor. - -## Accelerating the inference for Stable Diffusion using Bfloat16 - -We start the inference acceleration with Bfloat16 using Intel Extension for PyTorch. The [script](inference_bf16.py) is generally designed to support standard Stable Diffusion models with Bfloat16 support. -```bash -pip install diffusers transformers accelerate scipy safetensors - -export KMP_BLOCKTIME=1 -export KMP_SETTINGS=1 -export KMP_AFFINITY=granularity=fine,compact,1,0 - -# Intel OpenMP -export OMP_NUM_THREADS=< Cores to use > -export LD_PRELOAD=${LD_PRELOAD}:/path/to/lib/libiomp5.so -# Jemalloc is a recommended malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support. -export LD_PRELOAD=${LD_PRELOAD}:/path/to/lib/libjemalloc.so -export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:-1,muzzy_decay_ms:9000000000" - -# Launch with default DDIM -numactl --membind -C python python inference_bf16.py -# Launch with DPMSolverMultistepScheduler -numactl --membind -C python python inference_bf16.py --dpm - -``` - -## Accelerating the inference for Stable Diffusion using INT8 - -Coming soon ... 
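The Bfloat16 section above launches `inference_bf16.py` through `numactl`, but the script itself is not reproduced in this README. The sketch below is an assumption about what BF16 inference with Intel Extension for PyTorch typically looks like, not the actual contents of that script; the model id, prompt, and output filename are placeholders:

```python
import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; any standard Stable Diffusion model should behave similarly.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.bfloat16)

# Let IPEX optimize the heaviest submodule for bfloat16 execution on CPU.
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)

with torch.cpu.amp.autocast(dtype=torch.bfloat16), torch.no_grad():
    image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```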
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py deleted file mode 100644 index 1ffceb61f8b1dc18e7878371b56d8425177abf43..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py +++ /dev/null @@ -1,1554 +0,0 @@ -from typing import Any, Dict, List, Optional, Tuple, Union - -import numpy as np -import torch -import torch.nn as nn - -from ...configuration_utils import ConfigMixin, register_to_config -from ...models import ModelMixin -from ...models.attention import Attention -from ...models.attention_processor import AttentionProcessor, AttnAddedKVProcessor, AttnProcessor -from ...models.dual_transformer_2d import DualTransformer2DModel -from ...models.embeddings import GaussianFourierProjection, TimestepEmbedding, Timesteps -from ...models.transformer_2d import Transformer2DModel, Transformer2DModelOutput -from ...models.unet_2d_condition import UNet2DConditionOutput -from ...utils import logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - downsample_padding=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type - if down_block_type == "DownBlockFlat": - return DownBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "CrossAttnDownBlockFlat": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlockFlat") - return CrossAttnDownBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{down_block_type} is not supported.") - - -def get_up_block( - up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type - if up_block_type == "UpBlockFlat": - return UpBlockFlat( - 
num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "CrossAttnUpBlockFlat": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlockFlat") - return CrossAttnUpBlockFlat( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{up_block_type} is not supported.") - - -# Copied from diffusers.models.unet_2d_condition.UNet2DConditionModel with UNet2DConditionModel->UNetFlatConditionModel, nn.Conv2d->LinearMultiDim, Block2D->BlockFlat -class UNetFlatConditionModel(ModelMixin, ConfigMixin): - r""" - UNetFlatConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a - timestep and returns sample shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the models (such as downloading or saving, etc.) - - Parameters: - sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): - Height and width of input/output sample. - in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. - center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. - flip_sin_to_cos (`bool`, *optional*, defaults to `False`): - Whether to flip the sin to cos in the time embedding. - freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlockFlat", "CrossAttnDownBlockFlat", "CrossAttnDownBlockFlat", "DownBlockFlat")`): - The tuple of downsample blocks to use. - mid_block_type (`str`, *optional*, defaults to `"UNetMidBlockFlatCrossAttn"`): - The mid block type. Choose from `UNetMidBlockFlatCrossAttn` or `UNetMidBlockFlatSimpleCrossAttn`, will skip - the mid block layer if `None`. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlockFlat", "CrossAttnUpBlockFlat", "CrossAttnUpBlockFlat", "CrossAttnUpBlockFlat",)`): - The tuple of upsample blocks to use. - only_cross_attention(`bool` or `Tuple[bool]`, *optional*, default to `False`): - Whether to include self-attention in the basic transformer blocks, see - [`~models.attention.BasicTransformerBlock`]. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): - The tuple of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. - downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. 
- mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. - If `None`, it will skip the normalization and activation layers in post-processing - norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. - cross_attention_dim (`int` or `Tuple[int]`, *optional*, defaults to 1280): - The dimension of the cross attention features. - attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. - resnet_time_scale_shift (`str`, *optional*, defaults to `"default"`): Time scale shift config - for resnet blocks, see [`~models.resnet.ResnetBlockFlat`]. Choose from `default` or `scale_shift`. - class_embed_type (`str`, *optional*, defaults to None): - The type of class embedding to use which is ultimately summed with the time embeddings. Choose from `None`, - `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`. - num_class_embeds (`int`, *optional*, defaults to None): - Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing - class conditioning with `class_embed_type` equal to `None`. - time_embedding_type (`str`, *optional*, default to `positional`): - The type of position embedding to use for timesteps. Choose from `positional` or `fourier`. - timestep_post_act (`str, *optional*, default to `None`): - The second activation function to use in timestep embedding. Choose from `silu`, `mish` and `gelu`. - time_cond_proj_dim (`int`, *optional*, default to `None`): - The dimension of `cond_proj` layer in timestep embedding. - conv_in_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_in` layer. - conv_out_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_out` layer. - projection_class_embeddings_input_dim (`int`, *optional*): The dimension of the `class_labels` input when - using the "projection" `class_embed_type`. Required when using the "projection" `class_embed_type`. - class_embeddings_concat (`bool`, *optional*, defaults to `False`): Whether to concatenate the time - embeddings with the class embeddings. 
- """ - - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - sample_size: Optional[int] = None, - in_channels: int = 4, - out_channels: int = 4, - center_input_sample: bool = False, - flip_sin_to_cos: bool = True, - freq_shift: int = 0, - down_block_types: Tuple[str] = ( - "CrossAttnDownBlockFlat", - "CrossAttnDownBlockFlat", - "CrossAttnDownBlockFlat", - "DownBlockFlat", - ), - mid_block_type: Optional[str] = "UNetMidBlockFlatCrossAttn", - up_block_types: Tuple[str] = ( - "UpBlockFlat", - "CrossAttnUpBlockFlat", - "CrossAttnUpBlockFlat", - "CrossAttnUpBlockFlat", - ), - only_cross_attention: Union[bool, Tuple[bool]] = False, - block_out_channels: Tuple[int] = (320, 640, 1280, 1280), - layers_per_block: int = 2, - downsample_padding: int = 1, - mid_block_scale_factor: float = 1, - act_fn: str = "silu", - norm_num_groups: Optional[int] = 32, - norm_eps: float = 1e-5, - cross_attention_dim: Union[int, Tuple[int]] = 1280, - attention_head_dim: Union[int, Tuple[int]] = 8, - dual_cross_attention: bool = False, - use_linear_projection: bool = False, - class_embed_type: Optional[str] = None, - num_class_embeds: Optional[int] = None, - upcast_attention: bool = False, - resnet_time_scale_shift: str = "default", - time_embedding_type: str = "positional", - timestep_post_act: Optional[str] = None, - time_cond_proj_dim: Optional[int] = None, - conv_in_kernel: int = 3, - conv_out_kernel: int = 3, - projection_class_embeddings_input_dim: Optional[int] = None, - class_embeddings_concat: bool = False, - ): - super().__init__() - - self.sample_size = sample_size - - # Check inputs - if len(down_block_types) != len(up_block_types): - raise ValueError( - "Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`:" - f" {down_block_types}. `up_block_types`: {up_block_types}." - ) - - if len(block_out_channels) != len(down_block_types): - raise ValueError( - "Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`:" - f" {block_out_channels}. `down_block_types`: {down_block_types}." - ) - - if not isinstance(only_cross_attention, bool) and len(only_cross_attention) != len(down_block_types): - raise ValueError( - "Must provide the same number of `only_cross_attention` as `down_block_types`." - f" `only_cross_attention`: {only_cross_attention}. `down_block_types`: {down_block_types}." - ) - - if not isinstance(attention_head_dim, int) and len(attention_head_dim) != len(down_block_types): - raise ValueError( - "Must provide the same number of `attention_head_dim` as `down_block_types`. `attention_head_dim`:" - f" {attention_head_dim}. `down_block_types`: {down_block_types}." - ) - - if isinstance(cross_attention_dim, list) and len(cross_attention_dim) != len(down_block_types): - raise ValueError( - "Must provide the same number of `cross_attention_dim` as `down_block_types`. `cross_attention_dim`:" - f" {cross_attention_dim}. `down_block_types`: {down_block_types}." 
- ) - - # input - conv_in_padding = (conv_in_kernel - 1) // 2 - self.conv_in = LinearMultiDim( - in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding - ) - - # time - if time_embedding_type == "fourier": - time_embed_dim = block_out_channels[0] * 2 - if time_embed_dim % 2 != 0: - raise ValueError(f"`time_embed_dim` should be divisible by 2, but is {time_embed_dim}.") - self.time_proj = GaussianFourierProjection( - time_embed_dim // 2, set_W_to_weight=False, log=False, flip_sin_to_cos=flip_sin_to_cos - ) - timestep_input_dim = time_embed_dim - elif time_embedding_type == "positional": - time_embed_dim = block_out_channels[0] * 4 - - self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift) - timestep_input_dim = block_out_channels[0] - else: - raise ValueError( - f"{time_embedding_type} does not exist. Please make sure to use one of `fourier` or `positional`." - ) - - self.time_embedding = TimestepEmbedding( - timestep_input_dim, - time_embed_dim, - act_fn=act_fn, - post_act_fn=timestep_post_act, - cond_proj_dim=time_cond_proj_dim, - ) - - # class embedding - if class_embed_type is None and num_class_embeds is not None: - self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim) - elif class_embed_type == "timestep": - self.class_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim) - elif class_embed_type == "identity": - self.class_embedding = nn.Identity(time_embed_dim, time_embed_dim) - elif class_embed_type == "projection": - if projection_class_embeddings_input_dim is None: - raise ValueError( - "`class_embed_type`: 'projection' requires `projection_class_embeddings_input_dim` be set" - ) - # The projection `class_embed_type` is the same as the timestep `class_embed_type` except - # 1. the `class_labels` inputs are not first converted to sinusoidal embeddings - # 2. it projects from an arbitrary input dimension. - # - # Note that `TimestepEmbedding` is quite general, being mainly linear layers and activations. - # When used for embedding actual timesteps, the timesteps are first converted to sinusoidal embeddings. - # As a result, `TimestepEmbedding` can be passed arbitrary vectors. - self.class_embedding = TimestepEmbedding(projection_class_embeddings_input_dim, time_embed_dim) - elif class_embed_type == "simple_projection": - if projection_class_embeddings_input_dim is None: - raise ValueError( - "`class_embed_type`: 'simple_projection' requires `projection_class_embeddings_input_dim` be set" - ) - self.class_embedding = nn.Linear(projection_class_embeddings_input_dim, time_embed_dim) - else: - self.class_embedding = None - - self.down_blocks = nn.ModuleList([]) - self.up_blocks = nn.ModuleList([]) - - if isinstance(only_cross_attention, bool): - only_cross_attention = [only_cross_attention] * len(down_block_types) - - if isinstance(attention_head_dim, int): - attention_head_dim = (attention_head_dim,) * len(down_block_types) - - if isinstance(cross_attention_dim, int): - cross_attention_dim = (cross_attention_dim,) * len(down_block_types) - - if class_embeddings_concat: - # The time embeddings are concatenated with the class embeddings. 
The dimension of the - # time embeddings passed to the down, middle, and up blocks is twice the dimension of the - # regular time embeddings - blocks_time_embed_dim = time_embed_dim * 2 - else: - blocks_time_embed_dim = time_embed_dim - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - temb_channels=blocks_time_embed_dim, - add_downsample=not is_final_block, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim[i], - attn_num_head_channels=attention_head_dim[i], - downsample_padding=downsample_padding, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - self.down_blocks.append(down_block) - - # mid - if mid_block_type == "UNetMidBlockFlatCrossAttn": - self.mid_block = UNetMidBlockFlatCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=blocks_time_embed_dim, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - resnet_time_scale_shift=resnet_time_scale_shift, - cross_attention_dim=cross_attention_dim[-1], - attn_num_head_channels=attention_head_dim[-1], - resnet_groups=norm_num_groups, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - ) - elif mid_block_type == "UNetMidBlockFlatSimpleCrossAttn": - self.mid_block = UNetMidBlockFlatSimpleCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=blocks_time_embed_dim, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - cross_attention_dim=cross_attention_dim[-1], - attn_num_head_channels=attention_head_dim[-1], - resnet_groups=norm_num_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif mid_block_type is None: - self.mid_block = None - else: - raise ValueError(f"unknown mid_block_type : {mid_block_type}") - - # count how many layers upsample the images - self.num_upsamplers = 0 - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - reversed_attention_head_dim = list(reversed(attention_head_dim)) - reversed_cross_attention_dim = list(reversed(cross_attention_dim)) - only_cross_attention = list(reversed(only_cross_attention)) - - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - is_final_block = i == len(block_out_channels) - 1 - - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] - - # add upsample block for all BUT final layer - if not is_final_block: - add_upsample = True - self.num_upsamplers += 1 - else: - add_upsample = False - - up_block = get_up_block( - up_block_type, - num_layers=layers_per_block + 1, - in_channels=input_channel, - out_channels=output_channel, - prev_output_channel=prev_output_channel, - temb_channels=blocks_time_embed_dim, - add_upsample=add_upsample, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - 
cross_attention_dim=reversed_cross_attention_dim[i], - attn_num_head_channels=reversed_attention_head_dim[i], - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - if norm_num_groups is not None: - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps - ) - self.conv_act = nn.SiLU() - else: - self.conv_norm_out = None - self.conv_act = None - - conv_out_padding = (conv_out_kernel - 1) // 2 - self.conv_out = LinearMultiDim( - block_out_channels[0], out_channels, kernel_size=conv_out_kernel, padding=conv_out_padding - ) - - @property - def attn_processors(self) -> Dict[str, AttentionProcessor]: - r""" - Returns: - `dict` of attention processors: A dictionary containing all attention processors used in the model with - indexed by its weight name. - """ - # set recursively - processors = {} - - def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): - if hasattr(module, "set_processor"): - processors[f"{name}.processor"] = module.processor - - for sub_name, child in module.named_children(): - fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) - - return processors - - for name, module in self.named_children(): - fn_recursive_add_processors(name, module, processors) - - return processors - - def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): - r""" - Parameters: - `processor (`dict` of `AttentionProcessor` or `AttentionProcessor`): - The instantiated processor class or a dictionary of processor classes that will be set as the processor - of **all** `Attention` layers. - In case `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.: - - """ - count = len(self.attn_processors.keys()) - - if isinstance(processor, dict) and len(processor) != count: - raise ValueError( - f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" - f" number of attention layers: {count}. Please make sure to pass {count} processor classes." - ) - - def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): - if hasattr(module, "set_processor"): - if not isinstance(processor, dict): - module.set_processor(processor) - else: - module.set_processor(processor.pop(f"{name}.processor")) - - for sub_name, child in module.named_children(): - fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) - - for name, module in self.named_children(): - fn_recursive_attn_processor(name, module, processor) - - def set_default_attn_processor(self): - """ - Disables custom attention processors and sets the default attention implementation. - """ - self.set_attn_processor(AttnProcessor()) - - def set_attention_slice(self, slice_size): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. 
- - Args: - slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is - provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim` - must be a multiple of `slice_size`. - """ - sliceable_head_dims = [] - - def fn_recursive_retrieve_sliceable_dims(module: torch.nn.Module): - if hasattr(module, "set_attention_slice"): - sliceable_head_dims.append(module.sliceable_head_dim) - - for child in module.children(): - fn_recursive_retrieve_sliceable_dims(child) - - # retrieve number of attention layers - for module in self.children(): - fn_recursive_retrieve_sliceable_dims(module) - - num_sliceable_layers = len(sliceable_head_dims) - - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = [dim // 2 for dim in sliceable_head_dims] - elif slice_size == "max": - # make smallest slice possible - slice_size = num_sliceable_layers * [1] - - slice_size = num_sliceable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size - - if len(slice_size) != len(sliceable_head_dims): - raise ValueError( - f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different" - f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}." - ) - - for i in range(len(slice_size)): - size = slice_size[i] - dim = sliceable_head_dims[i] - if size is not None and size > dim: - raise ValueError(f"size {size} has to be smaller or equal to {dim}.") - - # Recursively walk through all the children. - # Any children which exposes the set_attention_slice method - # gets the message - def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]): - if hasattr(module, "set_attention_slice"): - module.set_attention_slice(slice_size.pop()) - - for child in module.children(): - fn_recursive_set_attention_slice(child, slice_size) - - reversed_slice_size = list(reversed(slice_size)) - for module in self.children(): - fn_recursive_set_attention_slice(module, reversed_slice_size) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, (CrossAttnDownBlockFlat, DownBlockFlat, CrossAttnUpBlockFlat, UpBlockFlat)): - module.gradient_checkpointing = value - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - encoder_hidden_states: torch.Tensor, - class_labels: Optional[torch.Tensor] = None, - timestep_cond: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None, - mid_block_additional_residual: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - return_dict: bool = True, - ) -> Union[UNet2DConditionOutput, Tuple]: - r""" - Args: - sample (`torch.FloatTensor`): (batch, channel, height, width) noisy inputs tensor - timestep (`torch.FloatTensor` or `float` or `int`): (batch) timesteps - encoder_hidden_states (`torch.FloatTensor`): (batch, sequence_length, feature_dim) encoder hidden states - encoder_attention_mask (`torch.Tensor`): - (batch, sequence_length) cross-attention mask (or bias), applied to encoder_hidden_states. 
If a - BoolTensor is provided, it will be turned into a bias, by adding a large negative value. False = hide - token. Other tensor types will be used as-is as bias values. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - - Returns: - [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`: - [`~models.unet_2d_condition.UNet2DConditionOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - """ - # By default samples have to be AT least a multiple of the overall upsampling factor. - # The overall upsampling factor is equal to 2 ** (# num of upsampling layers). - # However, the upsampling interpolation output size can be forced to fit any upsampling size - # on the fly if necessary. - default_overall_up_factor = 2**self.num_upsamplers - - # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor` - forward_upsample_size = False - upsample_size = None - - if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]): - logger.info("Forward upsample size to force interpolation output size.") - forward_upsample_size = True - - # prepare attention_mask - if attention_mask is not None: - attention_mask = (1 - attention_mask.to(sample.dtype)) * -10000.0 - attention_mask = attention_mask.unsqueeze(1) - - # ensure encoder_attention_mask is a bias, and make it broadcastable over multi-head-attention channels - if encoder_attention_mask is not None: - # if it's a mask: turn it into a bias. otherwise: assume it's already a bias - if encoder_attention_mask.dtype is torch.bool: - encoder_attention_mask = (1 - encoder_attention_mask.to(sample.dtype)) * -10000.0 - encoder_attention_mask = encoder_attention_mask.unsqueeze(1) - - # 0. center input if necessary - if self.config.center_input_sample: - sample = 2 * sample - 1.0 - - # 1. time - timesteps = timestep - if not torch.is_tensor(timesteps): - # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can - # This would be a good case for the `match` statement (Python 3.10+) - is_mps = sample.device.type == "mps" - if isinstance(timestep, float): - dtype = torch.float32 if is_mps else torch.float64 - else: - dtype = torch.int32 if is_mps else torch.int64 - timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device) - elif len(timesteps.shape) == 0: - timesteps = timesteps[None].to(sample.device) - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps.expand(sample.shape[0]) - - t_emb = self.time_proj(timesteps) - - # timesteps does not contain any weights and will always return f32 tensors - # but time_embedding might actually be running in fp16. so we need to cast here. - # there might be better ways to encapsulate this. 
- t_emb = t_emb.to(dtype=self.dtype) - - emb = self.time_embedding(t_emb, timestep_cond) - - if self.class_embedding is not None: - if class_labels is None: - raise ValueError("class_labels should be provided when num_class_embeds > 0") - - if self.config.class_embed_type == "timestep": - class_labels = self.time_proj(class_labels) - - class_emb = self.class_embedding(class_labels).to(dtype=self.dtype) - - if self.config.class_embeddings_concat: - emb = torch.cat([emb, class_emb], dim=-1) - else: - emb = emb + class_emb - - # 2. pre-process - sample = self.conv_in(sample) - - # 3. down - down_block_res_samples = (sample,) - for downsample_block in self.down_blocks: - if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention: - sample, res_samples = downsample_block( - hidden_states=sample, - temb=emb, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - cross_attention_kwargs=cross_attention_kwargs, - encoder_attention_mask=encoder_attention_mask, - ) - else: - sample, res_samples = downsample_block(hidden_states=sample, temb=emb) - - down_block_res_samples += res_samples - - if down_block_additional_residuals is not None: - new_down_block_res_samples = () - - for down_block_res_sample, down_block_additional_residual in zip( - down_block_res_samples, down_block_additional_residuals - ): - down_block_res_sample = down_block_res_sample + down_block_additional_residual - new_down_block_res_samples += (down_block_res_sample,) - - down_block_res_samples = new_down_block_res_samples - - # 4. mid - if self.mid_block is not None: - sample = self.mid_block( - sample, - emb, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - cross_attention_kwargs=cross_attention_kwargs, - encoder_attention_mask=encoder_attention_mask, - ) - - if mid_block_additional_residual is not None: - sample = sample + mid_block_additional_residual - - # 5. up - for i, upsample_block in enumerate(self.up_blocks): - is_final_block = i == len(self.up_blocks) - 1 - - res_samples = down_block_res_samples[-len(upsample_block.resnets) :] - down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] - - # if we have not reached the final block and need to forward the - # upsample size, we do it here - if not is_final_block and forward_upsample_size: - upsample_size = down_block_res_samples[-1].shape[2:] - - if hasattr(upsample_block, "has_cross_attention") and upsample_block.has_cross_attention: - sample = upsample_block( - hidden_states=sample, - temb=emb, - res_hidden_states_tuple=res_samples, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - upsample_size=upsample_size, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - ) - else: - sample = upsample_block( - hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size - ) - - # 6. 
post-process - if self.conv_norm_out: - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - if not return_dict: - return (sample,) - - return UNet2DConditionOutput(sample=sample) - - -class LinearMultiDim(nn.Linear): - def __init__(self, in_features, out_features=None, second_dim=4, *args, **kwargs): - in_features = [in_features, second_dim, 1] if isinstance(in_features, int) else list(in_features) - if out_features is None: - out_features = in_features - out_features = [out_features, second_dim, 1] if isinstance(out_features, int) else list(out_features) - self.in_features_multidim = in_features - self.out_features_multidim = out_features - super().__init__(np.array(in_features).prod(), np.array(out_features).prod()) - - def forward(self, input_tensor, *args, **kwargs): - shape = input_tensor.shape - n_dim = len(self.in_features_multidim) - input_tensor = input_tensor.reshape(*shape[0:-n_dim], self.in_features) - output_tensor = super().forward(input_tensor) - output_tensor = output_tensor.view(*shape[0:-n_dim], *self.out_features_multidim) - return output_tensor - - -class ResnetBlockFlat(nn.Module): - def __init__( - self, - *, - in_channels, - out_channels=None, - dropout=0.0, - temb_channels=512, - groups=32, - groups_out=None, - pre_norm=True, - eps=1e-6, - time_embedding_norm="default", - use_in_shortcut=None, - second_dim=4, - **kwargs, - ): - super().__init__() - self.pre_norm = pre_norm - self.pre_norm = True - - in_channels = [in_channels, second_dim, 1] if isinstance(in_channels, int) else list(in_channels) - self.in_channels_prod = np.array(in_channels).prod() - self.channels_multidim = in_channels - - if out_channels is not None: - out_channels = [out_channels, second_dim, 1] if isinstance(out_channels, int) else list(out_channels) - out_channels_prod = np.array(out_channels).prod() - self.out_channels_multidim = out_channels - else: - out_channels_prod = self.in_channels_prod - self.out_channels_multidim = self.channels_multidim - self.time_embedding_norm = time_embedding_norm - - if groups_out is None: - groups_out = groups - - self.norm1 = torch.nn.GroupNorm(num_groups=groups, num_channels=self.in_channels_prod, eps=eps, affine=True) - self.conv1 = torch.nn.Conv2d(self.in_channels_prod, out_channels_prod, kernel_size=1, padding=0) - - if temb_channels is not None: - self.time_emb_proj = torch.nn.Linear(temb_channels, out_channels_prod) - else: - self.time_emb_proj = None - - self.norm2 = torch.nn.GroupNorm(num_groups=groups_out, num_channels=out_channels_prod, eps=eps, affine=True) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels_prod, out_channels_prod, kernel_size=1, padding=0) - - self.nonlinearity = nn.SiLU() - - self.use_in_shortcut = ( - self.in_channels_prod != out_channels_prod if use_in_shortcut is None else use_in_shortcut - ) - - self.conv_shortcut = None - if self.use_in_shortcut: - self.conv_shortcut = torch.nn.Conv2d( - self.in_channels_prod, out_channels_prod, kernel_size=1, stride=1, padding=0 - ) - - def forward(self, input_tensor, temb): - shape = input_tensor.shape - n_dim = len(self.channels_multidim) - input_tensor = input_tensor.reshape(*shape[0:-n_dim], self.in_channels_prod, 1, 1) - input_tensor = input_tensor.view(-1, self.in_channels_prod, 1, 1) - - hidden_states = input_tensor - - hidden_states = self.norm1(hidden_states) - hidden_states = self.nonlinearity(hidden_states) - hidden_states = self.conv1(hidden_states) - - if temb is not None: - temb = 
self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None] - hidden_states = hidden_states + temb - - hidden_states = self.norm2(hidden_states) - hidden_states = self.nonlinearity(hidden_states) - - hidden_states = self.dropout(hidden_states) - hidden_states = self.conv2(hidden_states) - - if self.conv_shortcut is not None: - input_tensor = self.conv_shortcut(input_tensor) - - output_tensor = input_tensor + hidden_states - - output_tensor = output_tensor.view(*shape[0:-n_dim], -1) - output_tensor = output_tensor.view(*shape[0:-n_dim], *self.out_channels_multidim) - - return output_tensor - - -# Copied from diffusers.models.unet_2d_blocks.DownBlock2D with DownBlock2D->DownBlockFlat, ResnetBlock2D->ResnetBlockFlat, Downsample2D->LinearMultiDim -class DownBlockFlat(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - LinearMultiDim( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -# Copied from diffusers.models.unet_2d_blocks.CrossAttnDownBlock2D with CrossAttnDownBlock2D->CrossAttnDownBlockFlat, ResnetBlock2D->ResnetBlockFlat, Downsample2D->LinearMultiDim -class CrossAttnDownBlockFlat(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in 
range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - LinearMultiDim( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.FloatTensor, - temb: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - ): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - None, # timestep - None, # class_labels - cross_attention_kwargs, - attention_mask, - encoder_attention_mask, - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - ).sample - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -# Copied from diffusers.models.unet_2d_blocks.UpBlock2D with UpBlock2D->UpBlockFlat, ResnetBlock2D->ResnetBlockFlat, Upsample2D->LinearMultiDim -class UpBlockFlat(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - 
add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlockFlat( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([LinearMultiDim(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -# Copied from diffusers.models.unet_2d_blocks.CrossAttnUpBlock2D with CrossAttnUpBlock2D->CrossAttnUpBlockFlat, ResnetBlock2D->ResnetBlockFlat, Upsample2D->LinearMultiDim -class CrossAttnUpBlockFlat(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlockFlat( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - else: - 
attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([LinearMultiDim(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.FloatTensor, - res_hidden_states_tuple: Tuple[torch.FloatTensor, ...], - temb: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - upsample_size: Optional[int] = None, - attention_mask: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - ): - for resnet, attn in zip(self.resnets, self.attentions): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - None, # timestep - None, # class_labels - cross_attention_kwargs, - attention_mask, - encoder_attention_mask, - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - ).sample - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -# Copied from diffusers.models.unet_2d_blocks.UNetMidBlock2DCrossAttn with UNetMidBlock2DCrossAttn->UNetMidBlockFlatCrossAttn, ResnetBlock2D->ResnetBlockFlat -class UNetMidBlockFlatCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - dual_cross_attention=False, - use_linear_projection=False, - upcast_attention=False, - ): - super().__init__() - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - 
pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - if not dual_cross_attention: - attentions.append( - Transformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - ) - ) - else: - attentions.append( - DualTransformer2DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - ) - ) - resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward( - self, - hidden_states: torch.FloatTensor, - temb: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.FloatTensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - ) -> torch.FloatTensor: - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - output: Transformer2DModelOutput = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - ) - hidden_states = output.sample - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -# Copied from diffusers.models.unet_2d_blocks.UNetMidBlock2DSimpleCrossAttn with UNetMidBlock2DSimpleCrossAttn->UNetMidBlockFlatSimpleCrossAttn, ResnetBlock2D->ResnetBlockFlat -class UNetMidBlockFlatSimpleCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - ): - super().__init__() - - self.has_cross_attention = True - - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - self.num_heads = in_channels // self.attn_num_head_channels - - # there is always at least one resnet - resnets = [ - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - attentions.append( - Attention( - query_dim=in_channels, - cross_attention_dim=in_channels, - heads=self.num_heads, - dim_head=attn_num_head_channels, - added_kv_proj_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - bias=True, - upcast_softmax=True, - processor=AttnAddedKVProcessor(), - ) - ) - 
resnets.append( - ResnetBlockFlat( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward( - self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None - ): - cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {} - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - # attn - hidden_states = attn( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - **cross_attention_kwargs, - ) - - # resnet - hidden_states = resnet(hidden_states, temb) - - return hidden_states diff --git a/spaces/deelerb/3dselfie/PIFu/lib/renderer/gl/__init__.py b/spaces/deelerb/3dselfie/PIFu/lib/renderer/gl/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine.py deleted file mode 100644 index 25bce124aff5a44b37e2109634a1169edd1fe3f6..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine.py +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/2 17:46 -@Author : alexanderwu -@File : test_search_engine.py -""" -from __future__ import annotations - -import pytest - -from metagpt.logs import logger -from metagpt.tools import SearchEngineType -from metagpt.tools.search_engine import SearchEngine - - -class MockSearchEnine: - async def run(self, query: str, max_results: int = 8, as_string: bool = True) -> str | list[dict[str, str]]: - rets = [ - {"url": "https://metagpt.com/mock/{i}", "title": query, "snippet": query * i} for i in range(max_results) - ] - return "\n".join(rets) if as_string else rets - - -@pytest.mark.asyncio -@pytest.mark.parametrize( - ("search_engine_typpe", "run_func", "max_results", "as_string"), - [ - (SearchEngineType.SERPAPI_GOOGLE, None, 8, True), - (SearchEngineType.SERPAPI_GOOGLE, None, 4, False), - (SearchEngineType.DIRECT_GOOGLE, None, 8, True), - (SearchEngineType.DIRECT_GOOGLE, None, 6, False), - (SearchEngineType.SERPER_GOOGLE, None, 8, True), - (SearchEngineType.SERPER_GOOGLE, None, 6, False), - (SearchEngineType.DUCK_DUCK_GO, None, 8, True), - (SearchEngineType.DUCK_DUCK_GO, None, 6, False), - (SearchEngineType.CUSTOM_ENGINE, MockSearchEnine().run, 8, False), - (SearchEngineType.CUSTOM_ENGINE, MockSearchEnine().run, 6, False), - ], -) -async def test_search_engine( - search_engine_typpe, - run_func, - max_results, - as_string, -): - search_engine = SearchEngine(search_engine_typpe, run_func) - rsp = await search_engine.run("metagpt", max_results=max_results, as_string=as_string) - logger.info(rsp) - if as_string: - assert isinstance(rsp, str) - else: - assert isinstance(rsp, list) - assert len(rsp) == max_results diff --git a/spaces/devthedeveloper/Bark-with-Voice-Cloning/cloning/clonevoice.py b/spaces/devthedeveloper/Bark-with-Voice-Cloning/cloning/clonevoice.py deleted file mode 100644 index 
a59b0fc561040572400af2771cac8dac75e8d13f..0000000000000000000000000000000000000000 --- a/spaces/devthedeveloper/Bark-with-Voice-Cloning/cloning/clonevoice.py +++ /dev/null @@ -1,68 +0,0 @@ -from bark.generation import load_codec_model, generate_text_semantic, grab_best_device -from encodec.utils import convert_audio -from bark.hubert.hubert_manager import HuBERTManager -from bark.hubert.pre_kmeans_hubert import CustomHubert -from bark.hubert.customtokenizer import CustomTokenizer - -import torchaudio -import torch -import os -import gradio - - -def clone_voice(audio_filepath, dest_filename, progress=gradio.Progress(track_tqdm=True)): - # if len(text) < 1: - # raise gradio.Error('No transcription text entered!') - - use_gpu = False # not os.environ.get("BARK_FORCE_CPU", False) - progress(0, desc="Loading Codec") - model = load_codec_model(use_gpu=use_gpu) - - # From https://github.com/gitmylo/bark-voice-cloning-HuBERT-quantizer - hubert_manager = HuBERTManager() - hubert_manager.make_sure_hubert_installed() - hubert_manager.make_sure_tokenizer_installed() - - # From https://github.com/gitmylo/bark-voice-cloning-HuBERT-quantizer - # Load HuBERT for semantic tokens - - # Load the HuBERT model - device = grab_best_device(use_gpu) - hubert_model = CustomHubert(checkpoint_path='./models/hubert/hubert.pt').to(device) - - # Load the CustomTokenizer model - tokenizer = CustomTokenizer.load_from_checkpoint('./models/hubert/en_tokenizer.pth').to(device) # change to the correct path - - progress(0.25, desc="Converting WAV") - - # Load and pre-process the audio waveform - wav, sr = torchaudio.load(audio_filepath) - if wav.shape[0] == 2: # Stereo to mono if needed - wav = wav.mean(0, keepdim=True) - - wav = convert_audio(wav, sr, model.sample_rate, model.channels) - wav = wav.to(device) - progress(0.5, desc="Extracting codes") - - semantic_vectors = hubert_model.forward(wav, input_sample_hz=model.sample_rate) - semantic_tokens = tokenizer.get_token(semantic_vectors) - - # Extract discrete codes from EnCodec - with torch.no_grad(): - encoded_frames = model.encode(wav.unsqueeze(0)) - codes = torch.cat([encoded[0] for encoded in encoded_frames], dim=-1).squeeze() # [n_q, T] - - # get seconds of audio - # seconds = wav.shape[-1] / model.sample_rate - # generate semantic tokens - # semantic_tokens = generate_text_semantic(text, max_gen_duration_s=seconds, top_k=50, top_p=.95, temp=0.7) - - # move codes to cpu - codes = codes.cpu().numpy() - # move semantic tokens to cpu - semantic_tokens = semantic_tokens.cpu().numpy() - - import numpy as np - output_path = dest_filename + '.npz' - np.savez(output_path, fine_prompt=codes, coarse_prompt=codes[:2, :], semantic_prompt=semantic_tokens) - return ["Finished", output_path] \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/IKMultimediaKeyGenSerialKeykeygen.md b/spaces/diacanFperku/AutoGPT/IKMultimediaKeyGenSerialKeykeygen.md deleted file mode 100644 index 98828a3c62cd8c3fe980aeadf9f676ac621fd9a6..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/IKMultimediaKeyGenSerialKeykeygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

        IKMultimediaKeyGenSerialKeykeygen


        Download File: https://gohhs.com/2uFTIc



        -
        -Amplitube 4.9.1 Crack Plus Keygen with Serial Key Torrent is a guitar and effects modeling application. This software is manufactured by IK Multimedia. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/diacanFperku/AutoGPT/Raily 4 Se Keygen _VERIFIED_ Download.md b/spaces/diacanFperku/AutoGPT/Raily 4 Se Keygen _VERIFIED_ Download.md deleted file mode 100644 index 542c08c822ad603f2a3d803de658994b4f28bffa..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Raily 4 Se Keygen _VERIFIED_ Download.md +++ /dev/null @@ -1,5 +0,0 @@ -
        -

        A maximum of eight did not register. Housing support is limited. The group was replaced. This is a measure of size. First brought to prominence in the s. Reinforcement was negligible. The next effort was a bust. The one-upmanship of the leading light. Animals that eat the roots. This principle is commonly violated. This may be a single play. Usual candidates for this problem. Children had no access to physicians. Nothing could be said for certainty. Academic achievement has become. Product design is a broad spectrum. The act became obsolete. Download crack 1. The milepost is located on i. Outdoor recreational education is enhanced. Several previous records were set. This is inappropriate for academic use. This is a work by some at least. It was published in b. This was a major contribution. Stamps of florida are today. The format is not of print. The system is called a. A rise in prices is predicted. The original version of the book. This is done in each class. It is a channeling process. This is a crack 1. The four dihydroxybenzene rings are planar. This has been sold. The amount varies with size. This was the only competitive design. N and n are integers. The amount is set by the user. This is regulated by training teachers. The song was a minor hit. The second version was smaller. This is a popular means of transportation. The river empties into the okeechobee. Some users will feel right at home. The process is both selective and nonselective. The four dihydroxybenzene rings are planar. This is the schoolboy's equivalent. The novel was met with sustained interest. The first was an attempt to clean up the act. The game has since gained worldwide exposure. All new cars sold in the u. The university also has a museum. The series began with the first. S then becomes a pure proton with a mass. The group was later broken up. Currents are lessened during midwinter. This is the association's principal site. Primary colors are more common. This is the official biography. Their severity varies from one user to another. The second volume is. The first step is to visualize. A large part of the installation cost. This process is similar to downloading keygen or crack. The vote will not be delayed. The first contains the words. The last and final segment. Supposedly an act of war. He turned to the pillar. The first and most famous were the.AC Milan have won a staggering 65% of their points this season when away from home, according to data compiled by FootballWhispers. Milan are no strangers to success on the road, having won 11 of their previous 17 away from home, including last season's 2-1 victory over Inter at the San Siro. With 32 points from 12 games, Milan sit seventh in the league table while Inter, like many of the other sides to win a large proportion of points on the road, are one point behind in eighth. Tomaso Mazzotta compares Milan's Serie A defence with that of Juventus' Milan defender Milan Skriniar (top left) and Cesena's Simone Verdi (bottom left) AC Milan's 18 away wins are an exceptional return but many teams in Serie A, particularly the big clubs, would be happy with those numbers. However, there are a number of notable facts regarding the defending champions this season that highlight why they are the Serie A team to beat when away. Ten times this season the Rossoneri have won points away from home.

        -

        raily 4 se keygen download


        DOWNLOAD --->>> https://gohhs.com/2uFUOe



        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Release Code Circuit Wizardl HOT!.md b/spaces/diacanFperku/AutoGPT/Release Code Circuit Wizardl HOT!.md deleted file mode 100644 index 76a361c11fadbbb1a13850511aab7ea1fdeafc6d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Release Code Circuit Wizardl HOT!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Release Code Circuit Wizardl


        Download Ziphttps://gohhs.com/2uFUwz



        - -Area Code 401-461-3450. 620 South Pleasantburg ... and RELAY TESTING. CIRCUIT BREAKER LOAD TESTING ... majorin Economicstobe a Financial Wizardl. 1fdad05405
        -
        -
        -

        diff --git a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/monotonic_align/core.py b/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/monotonic_align/core.py deleted file mode 100644 index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 \ No newline at end of file diff --git a/spaces/digitalxingtong/Taffy-Bert-VITS2/text/chinese_bert.py b/spaces/digitalxingtong/Taffy-Bert-VITS2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Taffy-Bert-VITS2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/dineshreddy/WALT/mmdet/core/bbox/assigners/__init__.py b/spaces/dineshreddy/WALT/mmdet/core/bbox/assigners/__init__.py deleted file mode 100644 index 95e34a848652f2ab3ca6d3489aa2934d24817888..0000000000000000000000000000000000000000 --- 
a/spaces/dineshreddy/WALT/mmdet/core/bbox/assigners/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from .approx_max_iou_assigner import ApproxMaxIoUAssigner -from .assign_result import AssignResult -from .atss_assigner import ATSSAssigner -from .base_assigner import BaseAssigner -from .center_region_assigner import CenterRegionAssigner -from .grid_assigner import GridAssigner -from .hungarian_assigner import HungarianAssigner -from .max_iou_assigner import MaxIoUAssigner -from .point_assigner import PointAssigner -from .region_assigner import RegionAssigner - -__all__ = [ - 'BaseAssigner', 'MaxIoUAssigner', 'ApproxMaxIoUAssigner', 'AssignResult', - 'PointAssigner', 'ATSSAssigner', 'CenterRegionAssigner', 'GridAssigner', - 'HungarianAssigner', 'RegionAssigner' -] diff --git a/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/__init__.py b/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/__init__.py deleted file mode 100644 index 0b06303fe1000e11c5486c40c70606a34a5208e3..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/core/bbox/samplers/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from .base_sampler import BaseSampler -from .combined_sampler import CombinedSampler -from .instance_balanced_pos_sampler import InstanceBalancedPosSampler -from .iou_balanced_neg_sampler import IoUBalancedNegSampler -from .ohem_sampler import OHEMSampler -from .pseudo_sampler import PseudoSampler -from .random_sampler import RandomSampler -from .sampling_result import SamplingResult -from .score_hlr_sampler import ScoreHLRSampler - -__all__ = [ - 'BaseSampler', 'PseudoSampler', 'RandomSampler', - 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler', - 'OHEMSampler', 'SamplingResult', 'ScoreHLRSampler' -] diff --git a/spaces/dineshreddy/WALT/mmdet/utils/optimizer.py b/spaces/dineshreddy/WALT/mmdet/utils/optimizer.py deleted file mode 100644 index 9c9d11941c0b43d42bd6daad1e4b927eaca3e675..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/utils/optimizer.py +++ /dev/null @@ -1,33 +0,0 @@ -from mmcv.runner import OptimizerHook, HOOKS -try: - import apex -except: - print('apex is not installed') - - -@HOOKS.register_module() -class DistOptimizerHook(OptimizerHook): - """Optimizer hook for distributed training.""" - - def __init__(self, update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=-1, use_fp16=False): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.update_interval = update_interval - self.use_fp16 = use_fp16 - - def before_run(self, runner): - runner.optimizer.zero_grad() - - def after_train_iter(self, runner): - runner.outputs['loss'] /= self.update_interval - if self.use_fp16: - with apex.amp.scale_loss(runner.outputs['loss'], runner.optimizer) as scaled_loss: - scaled_loss.backward() - else: - runner.outputs['loss'].backward() - if self.every_n_iters(runner, self.update_interval): - if self.grad_clip is not None: - self.clip_grads(runner.model.parameters()) - runner.optimizer.step() - runner.optimizer.zero_grad() diff --git a/spaces/dineshreddy/WALT/walt/datasets/pipelines/compose.py b/spaces/dineshreddy/WALT/walt/datasets/pipelines/compose.py deleted file mode 100644 index f7a269832bd13983197daf1001b397a9c416c762..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/walt/datasets/pipelines/compose.py +++ /dev/null @@ -1,52 +0,0 @@ -import collections - -from mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - 
-@PIPELINES.register_module() -class Compose(object): - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - #print(data) - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/spaces/dorkai/text-generation-webui-main/modules/logging_colors.py b/spaces/dorkai/text-generation-webui-main/modules/logging_colors.py deleted file mode 100644 index 5c9714f7cd08f88f30335dfc0b7a694879414a68..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/modules/logging_colors.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copied from https://stackoverflow.com/a/1336640 - -import logging -import platform - - -def add_coloring_to_emit_windows(fn): - # add methods we need to the class - def _out_handle(self): - import ctypes - return ctypes.windll.kernel32.GetStdHandle(self.STD_OUTPUT_HANDLE) - out_handle = property(_out_handle) - - def _set_color(self, code): - import ctypes - - # Constants from the Windows API - self.STD_OUTPUT_HANDLE = -11 - hdl = ctypes.windll.kernel32.GetStdHandle(self.STD_OUTPUT_HANDLE) - ctypes.windll.kernel32.SetConsoleTextAttribute(hdl, code) - - setattr(logging.StreamHandler, '_set_color', _set_color) - - def new(*args): - FOREGROUND_BLUE = 0x0001 # text color contains blue. - FOREGROUND_GREEN = 0x0002 # text color contains green. - FOREGROUND_RED = 0x0004 # text color contains red. - FOREGROUND_INTENSITY = 0x0008 # text color is intensified. - FOREGROUND_WHITE = FOREGROUND_BLUE | FOREGROUND_GREEN | FOREGROUND_RED - # winbase.h - # STD_INPUT_HANDLE = -10 - # STD_OUTPUT_HANDLE = -11 - # STD_ERROR_HANDLE = -12 - - # wincon.h - # FOREGROUND_BLACK = 0x0000 - FOREGROUND_BLUE = 0x0001 - FOREGROUND_GREEN = 0x0002 - # FOREGROUND_CYAN = 0x0003 - FOREGROUND_RED = 0x0004 - FOREGROUND_MAGENTA = 0x0005 - FOREGROUND_YELLOW = 0x0006 - # FOREGROUND_GREY = 0x0007 - FOREGROUND_INTENSITY = 0x0008 # foreground color is intensified. - - # BACKGROUND_BLACK = 0x0000 - # BACKGROUND_BLUE = 0x0010 - # BACKGROUND_GREEN = 0x0020 - # BACKGROUND_CYAN = 0x0030 - # BACKGROUND_RED = 0x0040 - # BACKGROUND_MAGENTA = 0x0050 - BACKGROUND_YELLOW = 0x0060 - # BACKGROUND_GREY = 0x0070 - BACKGROUND_INTENSITY = 0x0080 # background color is intensified. 
- - levelno = args[1].levelno - if (levelno >= 50): - color = BACKGROUND_YELLOW | FOREGROUND_RED | FOREGROUND_INTENSITY | BACKGROUND_INTENSITY - elif (levelno >= 40): - color = FOREGROUND_RED | FOREGROUND_INTENSITY - elif (levelno >= 30): - color = FOREGROUND_YELLOW | FOREGROUND_INTENSITY - elif (levelno >= 20): - color = FOREGROUND_GREEN - elif (levelno >= 10): - color = FOREGROUND_MAGENTA - else: - color = FOREGROUND_WHITE - args[0]._set_color(color) - - ret = fn(*args) - args[0]._set_color(FOREGROUND_WHITE) - # print "after" - return ret - return new - - -def add_coloring_to_emit_ansi(fn): - # add methods we need to the class - def new(*args): - levelno = args[1].levelno - if (levelno >= 50): - color = '\x1b[31m' # red - elif (levelno >= 40): - color = '\x1b[31m' # red - elif (levelno >= 30): - color = '\x1b[33m' # yellow - elif (levelno >= 20): - color = '\x1b[32m' # green - elif (levelno >= 10): - color = '\x1b[35m' # pink - else: - color = '\x1b[0m' # normal - args[1].msg = color + args[1].msg + '\x1b[0m' # normal - # print "after" - return fn(*args) - return new - - -if platform.system() == 'Windows': - # Windows does not support ANSI escapes and we are using API calls to set the console color - logging.StreamHandler.emit = add_coloring_to_emit_windows(logging.StreamHandler.emit) -else: - # all non-Windows platforms are supporting ANSI escapes so we use them - logging.StreamHandler.emit = add_coloring_to_emit_ansi(logging.StreamHandler.emit) - # log = logging.getLogger() - # log.addFilter(log_filter()) - # //hdlr = logging.StreamHandler() - # //hdlr.setFormatter(formatter()) diff --git a/spaces/dragao-elastico/RVC_V2/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/dragao-elastico/RVC_V2/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/dsaigc/trans_for_sd/app.py b/spaces/dsaigc/trans_for_sd/app.py deleted file mode 100644 index 7bca9c6b099596f57f54ee15d8ff6fa80e7881ae..0000000000000000000000000000000000000000 --- a/spaces/dsaigc/trans_for_sd/app.py +++ /dev/null @@ -1,64 +0,0 @@ -import openai -import os - - -import gradio as gr -import openai -import backoff # for exponential backoff -from reportlab.lib.pagesizes import letter -from reportlab.lib import colors -from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, Table, TableStyle -from reportlab.lib.styles import getSampleStyleSheet -from reportlab.lib.enums import TA_CENTER - -openai.api_key = os.environ['chat_key'] - - -def trans_cn(user_message, model="gpt-3.5-turbo", max_tokens=4096): - # 生成聊天回复 - response = openai.ChatCompletion.create( - model= model, - messages=[{"role": "system", "content": "Translate my input into English language please Here are some requests: 1, all sentences start with : a storyboard panel dpicting.2, Remember these key words correspondences, 远景:cinematic-scene-long_shot, 中景:cinematic-scene-mid_shot,近景:cinematic-scene-close_up_shot.3,Remenber thes key words correspondences, 仰视:worm's eye view."}] + - [{"role": "user", "content": user_message}], - # system 中 定义回答问题的具体类型等 - temperature=0.5, - max_tokens=150, - top_p=1, - frequency_penalty=0, - presence_penalty=0, - stop=["\n\n"], - ) - assistant_response = response.choices[0].message.content - - # 将助手回复添加到对应的会话中 - return assistant_response - -input_cn = gr.inputs.Textbox(lines=5, placeholder="请输入你的中文提示语") -output_cn = gr.outputs.Textbox() -iface_cn = gr.Interface(fn=trans_cn, inputs=input_cn, 
outputs=output_cn, title="提示词", description="sd提示词中文转英文") - -def trans_en(user_message, model="gpt-3.5-turbo", max_tokens=4096): - # 生成聊天回复 - response = openai.ChatCompletion.create( - model= model, - messages=[{"role": "system", "content": "translate my input into prompts using for stable diffusion in Chinese language"}] + - [{"role": "user", "content": user_message}], - # system 中 定义回答问题的具体类型等 - temperature=0.5, - max_tokens=150, - top_p=1, - frequency_penalty=0, - presence_penalty=0, - stop=["\n\n"], - ) - assistant_response = response.choices[0].message.content - - # 将助手回复添加到对应的会话中 - return assistant_response - -input_en = gr.inputs.Textbox(lines=5, placeholder="请输入你的英文提示语") -output_en = gr.outputs.Textbox() -iface_en = gr.Interface(fn=trans_en, inputs=input_en, outputs=output_en, title="提示词", description="sd提示词英文转中文") - -iface_cn.launch() -iface_en.launch() \ No newline at end of file diff --git a/spaces/enadewan/ASK_FREDDY_BY_CONTRUCTOR_LEARNING/README.md b/spaces/enadewan/ASK_FREDDY_BY_CONTRUCTOR_LEARNING/README.md deleted file mode 100644 index 9e8343a84bae7b19684051b6b2ec7ee5c3eca10e..0000000000000000000000000000000000000000 --- a/spaces/enadewan/ASK_FREDDY_BY_CONTRUCTOR_LEARNING/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ASK FREDDY -emoji: 🚀 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -duplicated_from: enadewan/ASK_FREDDY ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/enzostvs/stable-diffusion-tpu/components/no-ssr.tsx b/spaces/enzostvs/stable-diffusion-tpu/components/no-ssr.tsx deleted file mode 100644 index 2a9d74a873187dd2e2d2a83d2d179902fd16ea16..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/components/no-ssr.tsx +++ /dev/null @@ -1,31 +0,0 @@ -"use client"; -import { useEffect, useLayoutEffect, useState } from "react"; -import PropTypes from "prop-types"; - -const useEnhancedEffect = - typeof window !== "undefined" && process.env.NODE_ENV !== "test" - ? useLayoutEffect - : useEffect; - -const NoSSR = ({ - children, - defer = false, - fallback = null, -}: { - children: React.ReactNode; - defer?: boolean; - fallback?: React.ReactNode; -}) => { - const [isMounted, setMountedState] = useState(false); - - useEnhancedEffect(() => { - if (!defer) setMountedState(true); - }, [defer]); - - useEffect(() => { - if (defer) setMountedState(true); - }, [defer]); - - return isMounted ? 
children : fallback; -}; -export default NoSSR; diff --git a/spaces/evi0mo/vits-fastapi-server/text/japanese.py b/spaces/evi0mo/vits-fastapi-server/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/evi0mo/vits-fastapi-server/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement 
in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/fatiXbelha/sd/Become a Professional Bus Driver with Bus Simulator 2023 APK Mod.md b/spaces/fatiXbelha/sd/Become a Professional Bus Driver with Bus Simulator 2023 APK Mod.md deleted file mode 100644 index 8f8b13b7e0172d01dee3314b1e79e21f7cab4883..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Become a Professional Bus Driver with Bus Simulator 2023 APK Mod.md +++ /dev/null @@ -1,114 +0,0 @@ -
        -

        Bus Simulator 2023 APK Mod: A Review

        -

        Do you love driving buses and exploring different cities? Do you want to experience the thrill and challenge of being a bus driver in a realistic simulation game? If yes, then you might want to check out Bus Simulator 2023, a new game that puts you in the driver's seat and lets you become a real bus driver. But wait, there's more. You can also download and install Bus Simulator 2023 APK Mod, a modified version of the game that gives you unlimited money, gems, and other benefits. Sounds tempting, right? But before you do that, let's take a closer look at what Bus Simulator 2023 is all about, how to download and install Bus Simulator 2023 APK Mod, and what are the pros and cons of using it.

        -

        What is Bus Simulator 2023?

        -

        Bus Simulator 2023 is a simulation game developed by Ovilex Software, a company that specializes in creating realistic driving games for mobile devices. The game was released in September 2022 and has received positive reviews from players and critics alike. The game features detailed maps all over the world, a wide variety of modern city buses, coach buses and school buses with realistic interiors and a groundbreaking 1:1 physics engine. The game also has a career mode, where you can earn money and reputation by completing various missions and challenges, such as transporting passengers, following traffic rules, avoiding accidents, and more. You can also customize your buses with different colors, decals, accessories, and upgrades. You can even create your own routes and share them with other players online.

        -

        bus simulator 2023 apk mod


        Download Ziphttps://urllie.com/2uNH3q



        -

        Features of Bus Simulator 2023

        -

        Bus Simulator 2023 has many features that make it one of the best bus simulation games on the market. Here are some of them:

        -

        Detailed maps

        -

        The game has over 50 maps from different countries and continents, such as Europe, America, Asia, Africa, and Australia. Each map has its own landmarks, scenery, weather, traffic, and pedestrians. You can explore famous cities like London, Paris, New York, Tokyo, Sydney, Cairo, and more. You can also drive on different types of roads, such as highways, rural roads, mountain roads, bridges, tunnels, etc.

        -

        Modern buses

        -

        The game has over 100 buses from different brands and models, such as Mercedes-Benz, Volvo, Scania, MAN, Setra, Neoplan, etc. Each bus has its own specifications, such as speed, acceleration, braking, fuel consumption, etc. Each bus also has a realistic interior with functional instruments, controls, mirrors, doors, windows, etc. You can switch between different camera views to get the best perspective of your driving.

        -

        Realistic physics

        -

The game has a groundbreaking 1:1 physics engine that simulates the behavior of real buses in various situations. You can feel the weight and momentum of your bus as you accelerate, brake, turn, or collide with other vehicles or objects. You can also experience the effects of gravity, friction, air resistance, and other factors that affect your driving. You can even adjust the difficulty level and the realism settings to suit your preferences and skills.

        -

        How to download and install Bus Simulator 2023 APK Mod?

        -

        If you are interested in playing Bus Simulator 2023, you can download it from the Google Play Store or the App Store for free. However, if you want to enjoy some extra benefits, such as unlimited money, gems, and other resources, you can download and install Bus Simulator 2023 APK Mod, a modified version of the game that bypasses the original game's restrictions and limitations. Here's how to do it:

        -

        Requirements for Bus Simulator 2023 APK Mod

        -

        Before you download and install Bus Simulator 2023 APK Mod, you need to make sure that your device meets the following requirements:

        -

        Android version

        -

        You need to have an Android device that runs on Android 5.0 or higher. This is because the game uses advanced graphics and features that are not compatible with older versions of Android.

        -

        -

        Storage space

        -

        You need to have at least 1 GB of free storage space on your device. This is because the game has a large file size and requires additional data to run smoothly.

        -

        Internet connection

        -

        You need to have a stable internet connection to download and install Bus Simulator 2023 APK Mod. You also need to have an internet connection to play the game online and access its online features, such as multiplayer mode, leaderboards, achievements, etc.

        -

        Steps to download and install Bus Simulator 2023 APK Mod

        -

        Once you have checked the requirements, you can follow these steps to download and install Bus Simulator 2023 APK Mod:

        -

        Enable unknown sources

        -

The first step is to enable unknown sources on your device. This is because Bus Simulator 2023 APK Mod is not available on the official app stores and is treated as an unknown source by your device. To enable unknown sources, go to your device's settings, then security, then unknown sources, and toggle it on. On newer Android versions this is a per-app permission called "Install unknown apps", which you grant to the browser or file manager you use to open the APK.

        -

        Download the APK file

        -

        The next step is to download the APK file of Bus Simulator 2023 APK Mod from a reliable website. You can search for it on Google or use this link to download it directly. Make sure that you download the latest version of the APK file that matches your device's specifications.

        -

        Install the APK file

        -

        The final step is to install the APK file on your device. To do this, locate the downloaded file in your device's file manager and tap on it. You will see a pop-up window asking for your permission to install the app. Tap on install and wait for the installation process to finish. Once it is done, you can launch the game and enjoy Bus Simulator 2023 APK Mod.
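
If you prefer to sideload the file from a computer instead of tapping it on the phone, the same install can be done over USB with adb. The short Python sketch below simply wraps the adb command line; it assumes adb (Android platform-tools) is installed, USB debugging is enabled on the device, and the APK file name is only an example.

```python
# Hypothetical helper: sideload an APK from a computer using adb.
# Assumes the Android SDK platform-tools (adb) are installed and
# USB debugging is enabled on the phone; the file name is an example.
import subprocess
from pathlib import Path

def sideload_apk(apk_path: str) -> None:
    apk = Path(apk_path)
    if not apk.is_file():
        raise FileNotFoundError(f"APK not found: {apk}")
    # "adb install -r" installs the package, replacing an older build if present.
    result = subprocess.run(
        ["adb", "install", "-r", str(apk)],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    sideload_apk("bus-simulator-2023-mod.apk")  # example file name only
```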

        -

        Pros and cons of Bus Simulator 2023 APK Mod

        -

        Bus Simulator 2023 APK Mod has some advantages and disadvantages that you should be aware of before using it. Here are some of them:

        -

        Pros of Bus Simulator 2023 APK Mod

        -

        Bus Simulator 2023 APK Mod has some pros that make it appealing to many players. Here are some of them:

        -

        Free to play

        -

        One of the main pros of Bus Simulator 2023 APK Mod is that it is free to play. You don't have to spend any money to download or play the game. You can enjoy all the features and content of the game without any limitations or restrictions.

        -

        Unlimited money and gems

        -

        Another pro of Bus Simulator 2023 APK Mod is that it gives you unlimited money and gems in the game. Money and gems are the main currencies in the game that you can use to buy new buses, customize them, upgrade them, etc. With unlimited money and gems, you can buy anything you want in the game without worrying about running out of resources.

        -

        No ads and in-app purchases

        -

        A third pro of Bus Simulator 2023 APK Mod is that it removes all the ads and in-app purchases from the game. Ads and in-app purchases are annoying and distracting features that interrupt your gameplay and try to make you spend real money on the game. With Bus Simulator 2023 APK Mod, you can play the game without any interruptions or temptations.

        -

        Cons of Bus Simulator 2023 APK Mod

        -

        Bus Simulator 2023 APK Mod also has some cons that make it risky and problematic for some players. Here are some of them:

        -

        Risk of malware and viruses

        -

        One of the main cons of Bus Simulator 2023 APK Mod is that it poses a risk of malware and viruses to your device. Since Bus Simulator 2023 APK Mod is not an official version of the game and is downloaded from an unknown source, it may contain harmful or malicious code that can damage your device or steal your personal information. You should always be careful and cautious when downloading and installing any APK file from the internet and scan it with a reliable antivirus software before using it.
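
Scanning with an antivirus app is the main safeguard, but if the site you download from publishes a checksum for the file, you can also compare it yourself before installing. The sketch below is illustrative only; the file name and the expected hash are placeholders, not values for any real APK.

```python
# Illustrative only: compare a downloaded file's SHA-256 against a checksum
# published by the download site. The file name and expected value below are
# placeholders, not real data for any specific APK.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder
actual = sha256_of("bus-simulator-2023-mod.apk")  # placeholder file name
print("checksum matches" if actual == expected else "checksum MISMATCH - do not install")
```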

        -

        Possible compatibility issues

        -

        Another con of Bus Simulator 2023 APK Mod is that it may cause compatibility issues with your device or the original game. Since Bus Simulator 2023 APK Mod is a modified version of the game that changes some of its features and functions, it may not work properly or smoothly on your device or with the original game. You may experience crashes, glitches, errors, or other problems that can affect your gameplay or performance. You should always check the compatibility of the APK file with your device and the original game before downloading and installing it.

        -

        Legal and ethical concerns

        -

        A third con of Bus Simulator 2023 APK Mod is that it raises some legal and ethical concerns. Since Bus Simulator 2023 APK Mod is not an authorized or endorsed version of the game and violates the terms and conditions of the original game, it may infringe the intellectual property rights of the developers and publishers of the game. You may face legal consequences or penalties if you use Bus Simulator 2023 APK Mod without their permission or consent. You may also be considered as cheating or unfair by other players who play the game legitimately and honestly. You should always respect the rights and efforts of the creators and owners of the game and support them by playing the game legally and ethically.

        -

        Conclusion

        -

        Bus Simulator 2023 is a simulation game that lets you become a real bus driver and explore different cities and countries in a realistic environment. You can download and play the game for free from the official app stores, or you can download and install Bus Simulator 2023 APK Mod, a modified version of the game that gives you unlimited money, gems, and other benefits. However, you should also be aware of the pros and cons of using Bus Simulator 2023 APK Mod, such as free to play, unlimited resources, no ads, risk of malware, compatibility issues, and legal and ethical concerns. You should weigh the advantages and disadvantages carefully before deciding whether to use Bus Simulator 2023 APK Mod or not.

        -

        FAQs

        -

        Here are some frequently asked questions about Bus Simulator 2023 APK Mod:

        -

        Q: Is Bus Simulator 2023 APK Mod safe to use?

        -

        A: Bus Simulator 2023 APK Mod is not guaranteed to be safe to use, as it may contain malware or viruses that can harm your device or personal information. You should always scan any APK file with a reliable antivirus software before using it.

        -

        Q: Is Bus Simulator 2023 APK Mod compatible with my device?

        -

        A: Bus Simulator 2023 APK Mod may not be compatible with your device, as it may require a higher Android version or more storage space than your device has. You should always check the compatibility of the APK file with your device before downloading and installing it.

        -

        Q: Is Bus Simulator 2023 APK Mod legal to use?

        -

        A: Bus Simulator 2023 APK Mod is not legal to use, as it violates the terms and conditions of the original game and infringes the intellectual property rights of the developers and publishers of the game. You may face legal consequences or penalties if you use Bus Simulator 2023 APK Mod without their permission or consent.

        -

        Q: Is Bus Simulator 2023 APK Mod ethical to use?

        -

        A: Bus Simulator 2023 APK Mod is not ethical to use, as it gives you an unfair advantage over other players who play the game legitimately and honestly. You may be considered as cheating or dishonest by other players if you use Bus Simulator 2023 APK Mod.

        -

        Q: Where can I download Bus Simulator 2023 APK Mod?

        -

        A: You can download Bus Simulator 2023 APK Mod from various websites on the internet, such as this link. However, you should always be careful and cautious when downloading any APK file from the internet, as it may contain harmful or malicious code.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Car Music for Free The Top Sites and Apps to Find Royalty-Free Tracks.md b/spaces/fatiXbelha/sd/Download Car Music for Free The Top Sites and Apps to Find Royalty-Free Tracks.md deleted file mode 100644 index 17c7266be97b45994bb021e86696bd1dfc0a9c2c..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Car Music for Free The Top Sites and Apps to Find Royalty-Free Tracks.md +++ /dev/null @@ -1,113 +0,0 @@ -
        -

        Music Car Download: How to Enjoy Your Favorite Songs on the Road

        -

        Do you love listening to music while driving your car? Do you want to have access to a wide range of songs and genres that suit your mood and taste? Do you want to avoid the hassle of switching between radio stations or CDs? If you answered yes to any of these questions, then you need music car download.

        -

        music car download


        Download ===> https://urllie.com/2uNzwW



        -

        Introduction

        -

        What is music car download?

        -

        Music car download is a term that refers to downloading or streaming music for your car using online services or platforms. It allows you to enjoy your favorite songs on the road without relying on traditional media sources. You can choose from various options, such as royalty-free music, curated playlists, or background music for your car videos.

        -

        Why do you need music car download?

        -

        Music car download has many benefits, such as:

        -
          -
• It enhances your driving experience by providing you with high-quality sound and diverse music choices.
• It saves you time and money by eliminating the need to buy CDs or pay for radio subscriptions.
• It gives you more control over your music selection by allowing you to customize your playlists and skip songs you don't like.
• It helps you avoid distractions and boredom by keeping you entertained and engaged while driving.
        -

        How to choose the best music car download service?

        -

        There are many factors to consider when choosing a music car download service, such as:

        -

        -
          -
        • The cost and availability of the service in your region.
        • -
        • The quality and quantity of the music offered by the service.
        • -
        • The compatibility and convenience of the service with your car's audio system.
        • -
        • The features and functions of the service, such as offline mode, shuffle, repeat, etc.
        • -
        -

        Main Body

        -

        Top 3 music car download services in 2023

        -

        In this section, we will review the top 3 music car download services in 2023, based on their popularity, performance, and user feedback. These are:

        -

        Pixabay Music

        -

        Pixabay Music is a free online platform that offers over 900 royalty-free audio tracks and instrumentals for your car. You can download them in MP3 format and use them for any purpose, such as personal or commercial use. You can also browse through different categories, such as trailer, sport, stylish, action, etc., to find the best music for your car.
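
For readers comfortable with a little scripting, the download-and-transfer step can also be automated. The following sketch is only an illustration: the track URL and the USB mount point are made-up placeholders, and it assumes the third-party requests library is installed.

```python
# Rough sketch: fetch a royalty-free MP3 over HTTP and copy it to a USB stick
# mounted for the car stereo. The URL and mount point are made-up placeholders;
# substitute the real track URL shown by the download button.
import shutil
import requests  # third-party: pip install requests

track_url = "https://example.com/audio/road-trip-track.mp3"  # placeholder URL
local_file = "road-trip-track.mp3"
usb_mount = "/media/usb/Music"  # placeholder mount point of the car USB stick

with requests.get(track_url, stream=True, timeout=30) as response:
    response.raise_for_status()
    with open(local_file, "wb") as f:
        for chunk in response.iter_content(chunk_size=1 << 16):
            f.write(chunk)

shutil.copy(local_file, usb_mount)  # transfer for playback in the car
print("saved and copied:", local_file)
```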

        -

        Spotify Car Music Playlist

        -

        Spotify Car Music Playlist is a curated playlist by Spotify that features 59 songs that are perfect for driving. You can stream or download them using the Spotify app on your smartphone or tablet. You can also enjoy other benefits, such as ad-free listening, unlimited skips, offline mode, etc., if you subscribe to Spotify Premium. The playlist includes genres such as rock, hip hop, pop, etc., and artists such as QubeSounds, FASSounds, ComaStudio, etc.

        -

        Storyblocks Background Music for Car Video

        -

        Storyblocks Background Music for Car Video is a subscription-based service that provides you with unlimited access to stock music for your car videos. You can choose from thousands of tracks that are suitable for different moods and themes, such as background music for adventure, travel, fun, etc. You can download them in MP3 or WAV format and use them for your personal or commercial projects. You can also filter the tracks by genre, mood, instrument, tempo, etc., to find the best music for your car video.

        -

        How to download music for your car using these services?

        -

        In this section, we will explain how to download music for your car using these services. The steps are:

        -

        Pixabay Music: Download royalty-free audio tracks and instrumentals

        -
          -
1. Go to the Pixabay Music website and create a free account or log in with your existing one.
2. Browse through the categories or use the search bar to find the music you want.
3. Click on the download button next to the track and choose the quality you prefer.
4. Save the file to your device and transfer it to your car's audio system via USB, Bluetooth, or other methods.
        -

        Spotify Car Music Playlist: Stream or download songs from the curated playlist

        -
          -
1. Download the Spotify app on your smartphone or tablet and create a free account or log in with your existing one.
2. Search for Spotify Car Music Playlist or use this link to access it.
3. Tap on the play button to stream the songs or tap on the download button to save them offline.
4. Connect your device to your car's audio system via AUX, Bluetooth, or other methods and enjoy the music.
        -

        Storyblocks Background Music for Car Video: Subscribe and download stock music for your car videos

        -
          -
1. Go to the Storyblocks Background Music for Car Video website and choose a subscription plan that suits your needs.
2. Browse through the tracks or use the filters to find the music you want.
3. Click on the download button next to the track and choose the format you prefer.
4. Save the file to your device and use it for your car video project.
        -

        Conclusion

        -

        Summary of the main points

        -

        In this article, we have discussed what is music car download, why do you need it, how to choose the best service, and how to download music for your car using three popular services: Pixabay Music, Spotify Car Music Playlist, and Storyblocks Background Music for Car Video. We have also provided you with some links and references for further information.

        -

        Call to action

        -

        If you are interested in music car download, we encourage you to try out these services and see for yourself how they can enhance your driving experience. You can also share your feedback and suggestions with us in the comments section below. Thank you for reading and happy driving!

FAQs

Q: What is the difference between royalty-free music and stock music?

A: Royalty-free music means that you can use the music without paying any royalties or fees to the original creator. Stock music means that you can use the music for a specific purpose or project by paying a one-time fee or subscription.

Q: How can I find out if a music car download service is legal and safe?

A: You can check the terms and conditions of the service, look for reviews and ratings from other users, and verify the source and quality of the music.

Q: How can I improve the sound quality of my car's audio system?

A: You can try some tips such as adjusting the equalizer settings, upgrading your speakers or amplifier, adding a subwoofer or soundproofing material, etc.

Q: How can I make my own music car playlist?

A: You can use online tools such as Playlist Maker or Playlist Generator to create your own custom playlist based on your preferences and criteria.

Q: How can I edit my car video with background music?

A: You can use online tools such as Kapwing or Clipchamp to edit your car video with background music. You can also add effects, transitions, text, etc., to make it more appealing.

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Chess via Bluetooth APK and Play with Other Chess Lovers.md b/spaces/fatiXbelha/sd/Download Chess via Bluetooth APK and Play with Other Chess Lovers.md deleted file mode 100644 index 455cdf3171eb18d175af9ded73e9847943634210..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Chess via Bluetooth APK and Play with Other Chess Lovers.md +++ /dev/null @@ -1,60 +0,0 @@ -
        -

        Chess APK Multiplayer Bluetooth: How to Play Chess with Friends Offline

        | | H2: Introduction |

        Introduction

        Chess is one of the oldest and most popular board games in the world. It is a game of strategy, logic, and skill that can challenge your mind and test your creativity. But what if you want to play chess with your friends or family without an internet connection? Is there a way to enjoy this classic game offline via bluetooth?

        The answer is yes! There are several chess apps that allow you to play chess with other players offline via bluetooth. In this article, we will review some of the best chess apk multiplayer bluetooth apps that you can download for free on your Android device. We will also show you how to use these apps to start a chess game with your friends or family in a few simple steps.

        -

        chess apk multiplayer bluetooth


        Download Ziphttps://urllie.com/2uNI2O



        | | H2: What is Chess APK Multiplayer Bluetooth? |

        What is Chess APK Multiplayer Bluetooth?

        Chess APK Multiplayer Bluetooth is a term that refers to any chess app that lets you play chess with other players offline via bluetooth. Bluetooth is a wireless technology that enables devices to communicate with each other over short distances. By using bluetooth, you can connect your device with another device that has the same chess app installed and start a chess game without needing an internet connection.

        There are many benefits of playing chess offline via bluetooth. For example, you can save your data usage, avoid ads and interruptions, play anytime and anywhere, and have more control over the game settings. You can also enjoy a more personal and interactive experience with your opponent, as you can see their reactions and expressions during the game.

        | | H2: How to Choose a Good Chess APK Multiplayer Bluetooth App? |

        How to Choose a Good Chess APK Multiplayer Bluetooth App?

        There are many chess apps that claim to offer multiplayer bluetooth features, but not all of them are reliable and user-friendly. To help you choose a good chess apk multiplayer bluetooth app, here are some factors that you should consider:

        • The app should be compatible with your device and operating system.
        • The app should have good ratings and reviews from other users.
        • The app should have clear instructions and easy-to-use interface.
        • The app should have various game modes and difficulty levels.
        • The app should have customizable board and pieces.
        • The app should have sound effects and animations.
        | | H2: What are Some of the Best Chess APK Multiplayer Bluetooth Apps? |

        What are Some of the Best Chess APK Multiplayer Bluetooth Apps?

        To save you some time and effort, we have selected some of the best chess apk multiplayer bluetooth apps that you can download for free on your Android device. Here are our top picks:

        | | H3: Bluetooth Chess |

        Bluetooth Chess

        Bluetooth Chess

This app lets you play chess with friends or family offline via bluetooth. You just need to install the app, click multiplayer, and choose a username. Then, someone creates a group and someone else joins the group. You can tap on the name of the group owner and connect automatically through bluetooth. When someone is connected, the group owner can start the game. You can also play chess single-player with CPU. There are three difficulty levels to choose from: easy, medium, and hard. The app has various board themes, 2D & 3D pieces, sound effects, and animations. The app has good ratings and reviews from other users. The app is compatible with Android 4.4 and up.

        Chess APK Multiplayer Bluetooth: How to Play Chess with Friends Offline

        | | H2: Introduction |

        Introduction

        Chess is one of the oldest and most popular board games in the world. It is a game of strategy, logic, and skill that can challenge your mind and test your creativity. But what if you want to play chess with your friends or family without an internet connection? Is there a way to enjoy this classic game offline via bluetooth?

        The answer is yes! There are several chess apps that allow you to play chess with other players offline via bluetooth. In this article, we will review some of the best chess apk multiplayer bluetooth apps that you can download for free on your Android device. We will also show you how to use these apps to start a chess game with your friends or family in a few simple steps.

        | | H2: What is Chess APK Multiplayer Bluetooth? |

        What is Chess APK Multiplayer Bluetooth?

        Chess APK Multiplayer Bluetooth is a term that refers to any chess app that lets you play chess with other players offline via bluetooth. Bluetooth is a wireless technology that enables devices to communicate with each other over short distances. By using bluetooth, you can connect your device with another device that has the same chess app installed and start a chess game without needing an internet connection.

        -

        chess bluetooth multiplayer apk download
        -chess game apk with bluetooth multiplayer
        -chess apk for android multiplayer via bluetooth
        -chess apk offline multiplayer bluetooth
        -chess apk free multiplayer bluetooth
        -chess apk mod multiplayer bluetooth
        -chess apk 3d multiplayer bluetooth
        -chess apk online multiplayer bluetooth
        -chess apk pro multiplayer bluetooth
        -chess apk latest version multiplayer bluetooth
        -chess apk hack multiplayer bluetooth
        -chess apk unlimited money multiplayer bluetooth
        -chess apk premium multiplayer bluetooth
        -chess apk full version multiplayer bluetooth
        -chess apk cracked multiplayer bluetooth
        -chess apk no ads multiplayer bluetooth
        -chess apk best multiplayer bluetooth
        -chess apk hd multiplayer bluetooth
        -chess apk real time multiplayer bluetooth
        -chess apk classic multiplayer bluetooth
        -chess apk modern multiplayer bluetooth
        -chess apk fun multiplayer bluetooth
        -chess apk easy multiplayer bluetooth
        -chess apk hard multiplayer bluetooth
        -chess apk challenge multiplayer bluetooth
        -chess apk tutorial multiplayer bluetooth
        -chess apk tips multiplayer bluetooth
        -chess apk tricks multiplayer bluetooth
        -chess apk guide multiplayer bluetooth
        -chess apk review multiplayer bluetooth
        -chess apk rating multiplayer bluetooth
        -chess apk feedback multiplayer bluetooth
        -chess apk support multiplayer bluetooth
        -chess apk update multiplayer bluetooth
        -chess apk new features multiplayer bluetooth
        -chess apk bug fixes multiplayer bluetooth
        -chess apk improvement multiplayer bluetooth
        -chess apk performance multiplayer bluetooth
        -chess apk compatibility multiplayer bluetooth
        -chess apk security multiplayer bluetooth
        -chess apk privacy multiplayer bluetooth
        -chess apk terms and conditions multiplayer bluetooth
        -chess apk refund policy multiplayer bluetooth
        -chess apk contact us multiplayer bluetooth
        -chess apk faq multiplayer bluetooth
        -chess apk help center multiplayer bluetooth
        -chess apk developer team multiplayer bluetooth

        There are many benefits of playing chess offline via bluetooth. For example, you can save your data usage, avoid ads and interruptions, play anytime and anywhere, and have more control over the game settings. You can also enjoy a more personal and interactive experience with your opponent, as you can see their reactions and expressions during the game.

        | | H2: How to Choose a Good Chess APK Multiplayer Bluetooth App? |

        How to Choose a Good Chess APK Multiplayer Bluetooth App?

        There are many chess apps that claim to offer multiplayer bluetooth features, but not all of them are reliable and user-friendly. To help you choose a good chess apk multiplayer bluetooth app, here are some factors that you should consider:

        • The app should be compatible with your device and operating system.
        • The app should have good ratings and reviews from other users.
        • The app should have clear instructions and easy-to-use interface.
        • The app should have various game modes and difficulty levels.
        • The app should have customizable board and pieces.
        • The app should have sound effects and animations.
        | | H2: What are Some of the Best Chess APK Multiplayer Bluetooth Apps? |

        What are Some of the Best Chess APK Multiplayer Bluetooth Apps?

        To save you some time and effort, we have selected some of the best chess apk multiplayer bluetooth apps that you can download for free on your Android device. Here are our top picks:

Bluetooth Chess

        This app lets you play chess with friends or family offline via bluetooth. You just need to install the app, click multiplayer, and choose a username. Then, someone creates a group and someone else joins the group. You can tap on the name of the group owner and connect automatically through bluetooth. When someone is connected, the group owner can start the game. You can also play chess single-player with CPU. There are three difficulty levels to choose from: easy, medium, and hard. The app has various board themes, 2D & 3D pieces, sound effects, and animations. The app has good ratings and reviews from other users. The app is compatible with Android 4.4 and up.

Chesser - bluetooth chess

        This app allows you to play chess via bluetooth with two devices. You need to make sure that bluetooth is on and your device is paired with the opponent's device. In the app, you can hit "host a game" if you want to be the white player or "join a game" if you want to be the black player. First, the hosting player has to open the server and then the other player has to choose the host's device name from the list of available devices. The app has a simple and elegant design, with a wooden board and realistic pieces. You can also play chess against the computer with three levels of difficulty: easy, normal, and hard. The app has a timer, a move history, and an undo option. The app is compatible with Android 4.1 and up.

MultiplayerChess via Bluetooth

        This app enables you to play chess with another player via bluetooth connection. You need to turn on bluetooth and pair your device with the other device before opening the app. In the app, you can choose to be the host or the client. The host will be the white player and the client will be the black player. The host will create a server and the client will scan for the server. Once connected, the game will start automatically. You can also play chess against the AI with four levels of difficulty: easy, medium, hard, and expert. The app has a minimalist and colorful design, with a 2D board and pieces. You can also change the board color and the piece style. The app has sound effects, a move counter, and a restart option. The app is compatible with Android 4.0 and up.
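All three apps follow the same basic pattern under the hood: one phone publishes a Bluetooth service and listens, and the other phone connects to it over the paired link. The snippet below is only an illustrative sketch of that host/join flow using Android's standard RFCOMM API; it is not code from any of these apps, the service UUID and function names are made up for the example, and it assumes that Bluetooth permissions (BLUETOOTH_CONNECT on Android 12 and later) are already granted and that the blocking calls run off the main thread.

```kotlin
import android.bluetooth.BluetoothAdapter
import android.bluetooth.BluetoothDevice
import android.bluetooth.BluetoothSocket
import java.util.UUID

// Both phones must agree on one service UUID (this value is just an example).
val CHESS_SERVICE_UUID: UUID = UUID.fromString("8ce255c0-200a-11e0-ac64-0800200c9a66")

// Host side ("host a game"): publish an RFCOMM service and wait for the opponent.
fun hostGame(adapter: BluetoothAdapter): BluetoothSocket {
    val server = adapter.listenUsingRfcommWithServiceRecord("BluetoothChess", CHESS_SERVICE_UUID)
    val socket = server.accept() // blocks until the other phone connects
    server.close()               // only one opponent is expected
    return socket
}

// Guest side ("join a game"): connect to the already-paired host device.
fun joinGame(hostDevice: BluetoothDevice): BluetoothSocket {
    val socket = hostDevice.createRfcommSocketToServiceRecord(CHESS_SERVICE_UUID)
    socket.connect()             // blocks until the link is established
    return socket
}

// After that, each move can be exchanged as a short line of text, for example "e2e4".
fun sendMove(socket: BluetoothSocket, move: String) {
    socket.outputStream.write((move + "\n").toByteArray())
    socket.outputStream.flush()
}
```

This is also why every app in this list asks you to pair the devices first, and why the host has to open the game before the other player can join.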


        How to Play Chess Offline via Bluetooth?

        Now that you have downloaded one of the chess apk multiplayer bluetooth apps, you might be wondering how to play chess offline via bluetooth. Here are some general steps that you can follow:

        1. Make sure that both devices have bluetooth turned on and are paired with each other.
        2. Open the same chess app on both devices.
        3. Select multiplayer mode and choose a username.
        4. One device will create a group or a server and the other device will join the group or scan for the server.
        5. When both devices are connected, the game will start.
        6. Enjoy playing chess offline via bluetooth!

        Conclusion

        Playing chess offline via bluetooth is a great way to have fun and challenge your friends or family without an internet connection. You can use any of the chess apk multiplayer bluetooth apps that we have reviewed in this article to start a chess game in a few simple steps. These apps are free, easy to use, and offer various features and options to enhance your chess experience. So what are you waiting for? Download one of these apps today and enjoy playing chess offline via bluetooth!


        FAQs

        Here are some frequently asked questions about chess apk multiplayer bluetooth:


        Q: Do I need an internet connection to play chess offline via bluetooth?


        A: No, you don't need an internet connection to play chess offline via bluetooth. You only need to have bluetooth enabled and paired with another device that has the same chess app installed.


        Q: How many players can play chess offline via bluetooth?


        A: Most of the chess apk multiplayer bluetooth apps allow only two players to play chess offline via bluetooth. However, some apps may support more players in the future.


        Q: Can I play chess offline via bluetooth with different devices?


        A: Yes, you can play chess offline via bluetooth with different devices as long as they have the same chess app installed and are compatible with each other. For example, you can play chess offline via bluetooth with an Android phone and an Android tablet, but not with an Android phone and an iPhone.


        Q: How can I improve my chess skills offline via bluetooth?


        A: Playing chess offline via bluetooth can help you improve your chess skills by practicing with different opponents and challenging yourself with different difficulty levels. You can also learn from your mistakes by reviewing your move history and undoing your moves. Additionally, you can read some chess books, watch some chess videos, or take some chess lessons online to improve your chess knowledge and strategies.


        Q: What are some of the advantages and disadvantages of playing chess offline via bluetooth?


        A: Some of the advantages of playing chess offline via bluetooth are:

        • You can save your data usage and avoid ads and interruptions.
        • You can play anytime and anywhere without an internet connection.
        • You can have more control over the game settings and options.
        • You can enjoy a more personal and interactive experience with your opponent.

        Some of the disadvantages of playing chess offline via bluetooth are:

        • You need to have a compatible device and a paired device with the same chess app installed.
        • You may experience some connection issues or delays during the game.
        • You may not have access to some features or updates that require an internet connection.
        • You may not be able to play with other players online or join tournaments or leagues.

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Fanuc-Kfloppy-1.md b/spaces/fatiXbelha/sd/Fanuc-Kfloppy-1.md deleted file mode 100644 index 6ba9eaf35035d9488e9ada2c1d4d539a627b2f1f..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Fanuc-Kfloppy-1.md +++ /dev/null @@ -1,118 +0,0 @@ -## Fanuc Kfloppy 1 - - - - - - - - - -**Download --->>> [https://tweeat.com/2txiy2](https://tweeat.com/2txiy2)** - - - - - - - - - - - - - -# What is Fanuc Kfloppy 1 and How to Use It? - - - -Fanuc Kfloppy 1 is a software tool that allows you to backup and restore programs and system files from a Fanuc robot controller using a PC with a serial port. It is especially useful for older models of Fanuc robots, such as the R-J2 series, that do not have USB or Ethernet ports. - - - -To use Fanuc Kfloppy 1, you need to have a PC running DOS or a DOS emulator, such as VMWare or DOSBox. You also need a serial cable that connects the PC to the robot controller. The cable should have a 25-pin male connector on one end and a 9-pin female connector on the other end. You can find the pinout diagram for the cable in the Fanuc manuals. - - - -Once you have the PC and the cable ready, you need to install Fanuc Kfloppy 1 on your hard drive. You can obtain the software from Fanuc or from online sources, such as Robotforum or SoundCloud. After installing the software, you need to configure the serial port settings on your PC and on the robot controller. The settings should match each other and be as follows: - - - -- Baud rate: 9600 - -- Data bits: 8 - -- Parity: None - -- Stop bits: 1 - -- Flow control: None - - - -Next, you need to run Fanuc Kfloppy 1 on your PC by typing KFLOPPY at the DOS prompt. You will see a menu with four options: Backup, Restore, Format and Exit. To backup your programs and system files from the robot controller, choose Backup and follow the instructions on the screen. You will be asked to enter a file name for each file you want to backup. The files will be saved in your current directory with a .PE extension. - - - -To restore your programs and system files to the robot controller, choose Restore and follow the instructions on the screen. You will be asked to select a file name for each file you want to restore. The files should have a .PE extension and be located in your current directory. The files will be transferred to the robot controller and overwrite any existing files with the same name. - - - -To format your PC's hard drive as a virtual floppy disk for the robot controller, choose Format and follow the instructions on the screen. This option will erase all data on your hard drive and create a new partition with FAT16 file system. The partition size will be 1.44 MB, which is equivalent to a standard floppy disk. You can use this option if you want to use your PC as a floppy drive for the robot controller. - - - -To exit Fanuc Kfloppy 1, choose Exit and press Enter. You can then turn off your PC and disconnect the serial cable from the robot controller. - - - -Fanuc Kfloppy 1 is a handy tool that can help you backup and restore your Fanuc robot programs and system files easily and quickly. However, it is important to use it with caution and follow the instructions carefully. Otherwise, you may damage your PC or your robot controller. - - - -## Why Choose Fanuc Robots? - - - -Fanuc robots are among the most popular and trusted industrial robots in the world. 
They offer many advantages for manufacturers who want to improve their productivity, quality and safety. Some of the benefits of choosing Fanuc robots are: - - - -- Wide range of models: Fanuc has over 100 robot models to suit any application and industry. Whether you need a small and fast robot for assembly, a large and strong robot for material handling, a precise and flexible robot for welding, or a collaborative and safe robot for human-robot interaction, Fanuc has a robot for you. - -- High performance: Fanuc robots are designed to deliver high speed, accuracy and repeatability. They can handle complex tasks with ease and efficiency. They also have intelligent features such as vision, force sensing and learning vibration control that enhance their capabilities and adaptability. - -- Easy integration: Fanuc robots are compatible with various devices and systems, such as PLCs, cameras, sensors, grippers and tools. They can communicate with other machines and robots via Ethernet or fieldbus networks. They also have user-friendly software and interfaces that make programming and operation simple and intuitive. - -- Reliability and durability: Fanuc robots are built to last and perform in harsh environments. They have robust mechanical structures and components that resist wear and tear. They also have advanced energy management systems that optimize their power consumption and reduce their environmental impact. - -- Support and service: Fanuc provides comprehensive support and service for its robots worldwide. It has a network of authorized distributors and integrators that can assist you with installation, training, maintenance and troubleshooting. It also offers online resources, such as manuals, videos, FAQs and forums, that can help you solve any issues or questions you may have. - - - -## How to Get Started with Fanuc Robots? - - - -If you are interested in getting started with Fanuc robots, here are some steps you can follow: - - - -1. Define your goals: Identify what you want to achieve with automation, such as increasing output, reducing costs, improving quality or enhancing safety. Also consider your budget, space, timeline and workforce needs. - -2. Find your robot: Use the Fanuc robot finder tool to search for the best robot model for your application and industry. You can filter by payload, reach, axis number, mounting position and other criteria. You can also compare different models and see their specifications and features. - -3. Contact your local distributor: Find your nearest Fanuc authorized distributor or integrator using the Fanuc global website. They can provide you with more information about the robot models, options and accessories you need. They can also help you with quotation, ordering, delivery and installation. - -4. Learn how to use your robot: Attend a training course offered by Fanuc or your distributor to learn how to program and operate your robot. You can also access online tutorials, videos and manuals on the Fanuc website or the Robotforum community. You can also contact Fanuc technical support or your distributor for any assistance or queries. - -5. Enjoy the benefits of automation: Once your robot is up and running, you can start enjoying the benefits of automation for your manufacturing process. You can monitor your robot's performance using the FANUC Zero Down Time application or the FANUC FIELD system. You can also update your robot's software or add new features as needed. 
- - - - dfd1c89656 - - - - - diff --git a/spaces/fengmuxi/ChatGpt-Web/app/locales/jp.ts b/spaces/fengmuxi/ChatGpt-Web/app/locales/jp.ts deleted file mode 100644 index 44cb8de2523449b9f9b09c439414deb169f64c9f..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/locales/jp.ts +++ /dev/null @@ -1,272 +0,0 @@ -import { SubmitKey } from "../store/config"; -import type { LocaleType } from "./index"; - -const jp: LocaleType = { - WIP: "この機能は開発中です……", - Error: { - Unauthorized: - "現在は未承認状態です。左下の設定ボタンをクリックし、アクセスパスワードを入力してください。", - }, - ChatItem: { - ChatItemCount: (count: number) => `${count} 通のチャット`, - }, - Chat: { - SubTitle: (count: number) => `ChatGPTとの ${count} 通のチャット`, - Actions: { - ChatList: "メッセージリストを表示", - CompressedHistory: "圧縮された履歴プロンプトを表示", - Export: "チャット履歴をエクスポート", - Copy: "コピー", - Stop: "停止", - Retry: "リトライ", - Delete: "Delete", - }, - Rename: "チャットの名前を変更", - Typing: "入力中…", - Input: (submitKey: string) => { - var inputHints = `${submitKey} で送信`; - if (submitKey === String(SubmitKey.Enter)) { - inputHints += ",Shift + Enter で改行"; - } - return inputHints + ",/ で自動補完をトリガー"; - }, - Send: "送信", - Config: { - Reset: "重置默认", - SaveAs: "另存为面具", - }, - }, - Export: { - Title: "チャット履歴をMarkdown形式でエクスポート", - Copy: "すべてコピー", - Download: "ファイルをダウンロード", - MessageFromYou: "あなたからのメッセージ", - MessageFromChatGPT: "ChatGPTからのメッセージ", - }, - Memory: { - Title: "履歴メモリ", - EmptyContent: "まだ記憶されていません", - Send: "メモリを送信", - Copy: "メモリをコピー", - Reset: "チャットをリセット", - ResetConfirm: - "リセット後、現在のチャット履歴と過去のメモリがクリアされます。リセットしてもよろしいですか?", - }, - Home: { - NewChat: "新しいチャット", - DeleteChat: "選択したチャットを削除してもよろしいですか?", - DeleteToast: "チャットが削除されました", - Revert: "元に戻す", - }, - User:{ - Title: "利用者", - SubTitle: "ユーザー情報インターフェイス", - Login:"ログイン", - LoginTitle:"ユーザーがログオンする", - Register:"入る", - RegisterTitle:"新しいユーザーを登録する", - Findpwd:"パスワードを回復する", - FindpwdTitle:"アカウントのパスワードを入力すると、メールに送信されます", - Name:"ユーザー名", - Wallet:"ユーザークレジット", - Mail:"ユーザー メールボックス", - SigState:"チェックイン状況", - Ststus:"ログアウトする", - Vip:"メンバー", - kami:"引き換えコード", - NickName:"ニックネーム", - User:"口座番号(番号のみ)", - Password:"パスワード(最低6桁)", - Email:"メールボックス", - Code:"キャプチャ", - Pass:{ - Title:"修改密码", - OldPwd:"旧密码", - NewPwd:"新密码", - NewPwd1:"确认密码" - }, - Save:"保存" - }, - Settings: { - Title: "設定", - SubTitle: "設定オプション", - Actions: { - ClearAll: "すべてのデータをクリア", - ResetAll: "すべてのオプションをリセット", - Close: "閉じる", - ConfirmResetAll: "すべての設定をリセットしてもよろしいですか?", - ConfirmClearAll: "すべてのチャットをリセットしてもよろしいですか?", - }, - Lang: { - Name: "Language", - All: "所有语言", - Options: { - cn: "简体中文", - en: "English", - tw: "繁體中文", - es: "Español", - it: "Italiano", - tr: "Türkçe", - jp: "日本語", - de: "Deutsch", - }, - }, - Avatar: "アバター", - FontSize: { - Title: "フォントサイズ", - SubTitle: "チャット内容のフォントサイズ", - }, - - Update: { - Version: (x: string) => `現在のバージョン:${x}`, - IsLatest: "最新バージョンです", - CheckUpdate: "アップデートを確認", - IsChecking: "アップデートを確認しています...", - FoundUpdate: (x: string) => `新しいバージョンが見つかりました:${x}`, - GoToUpdate: "更新する", - }, - SendKey: "送信キー", - Theme: "テーマ", - TightBorder: "ボーダーレスモード", - SendPreviewBubble: { - Title: "プレビューバブルの送信", - SubTitle: "在预览气泡中预览 Markdown 内容", - }, - Mask: { - Title: "面具启动页", - SubTitle: "新建聊天时,展示面具启动页", - }, - Prompt: { - Disable: { - Title: "プロンプトの自動補完を無効にする", - SubTitle: - "入力フィールドの先頭に / を入力すると、自動補完がトリガーされます。", - }, - List: "カスタムプロンプトリスト", - ListCount: (builtin: number, custom: number) => - `組み込み ${builtin} 件、ユーザー定義 ${custom} 件`, - Edit: "編集", - Modal: { - Title: "プロンプトリスト", - Add: "新規追加", - Search: "プロンプトワード検索", - }, - EditModal: { - Title: "编辑提示词", - 
}, - }, - HistoryCount: { - Title: "履歴メッセージ数を添付", - SubTitle: "リクエストごとに添付する履歴メッセージ数", - }, - CompressThreshold: { - Title: "履歴メッセージの長さ圧縮しきい値", - SubTitle: - "圧縮されていない履歴メッセージがこの値を超えた場合、圧縮が行われます。", - }, - Token: { - Title: "APIキー", - SubTitle: "自分のキーを使用してパスワードアクセス制限を迂回する", - Placeholder: "OpenAI APIキー", - }, - Usage: { - Title: "残高照会", - SubTitle(used: any, total: any) { - return `今月は $${used} を使用しました。総額は $${total} です。`; - }, - IsChecking: "確認中...", - Check: "再確認", - NoAccess: "APIキーまたはアクセスパスワードを入力して残高を表示", - }, - AccessCode: { - Title: "アクセスパスワード", - SubTitle: "暗号化アクセスが有効になっています", - Placeholder: "アクセスパスワードを入力してください", - }, - Bot: "AIベンダー (bot)", - Model: "モデル (model)", - Temperature: { - Title: "ランダム性 (temperature)", - SubTitle: - "値が大きいほど、回答がランダムになります。1以上の値には文字化けが含まれる可能性があります。", - }, - MaxTokens: { - Title: "シングルレスポンス制限 (max_tokens)", - SubTitle: "1回のインタラクションで使用される最大トークン数", - }, - PresencePenalty: { - Title: "トピックの新鮮度 (presence_penalty)", - SubTitle: "値が大きいほど、新しいトピックへの展開が可能になります。", - }, - }, - Store: { - DefaultTopic: "新しいチャット", - BotHello: "何かお手伝いできることはありますか", - Error: "エラーが発生しました。しばらくしてからやり直してください。", - Prompt: { - History: (content: string) => - "これは、AI とユーザの過去のチャットを要約した前提となるストーリーです:" + - content, - Topic: - "4~5文字でこの文章の簡潔な主題を返してください。説明、句読点、感嘆詞、余分なテキストは無しで。もし主題がない場合は、「おしゃべり」を返してください", - Summarize: - "あなたとユーザの会話を簡潔にまとめて、後続のコンテキストプロンプトとして使ってください。200字以内に抑えてください。", - }, - }, - Copy: { - Success: "クリップボードに書き込みました", - Failed: "コピーに失敗しました。クリップボード許可を与えてください。", - }, - Context: { - Toast: (x: any) => `前置コンテキストが ${x} 件設定されました`, - Edit: "前置コンテキストと履歴メモリ", - Add: "新規追加", - }, - Plugin: { Name: "插件" }, - Mask: { - Name: "面具", - Page: { - Title: "预设角色面具", - SubTitle: (count: number) => `${count} 个预设角色定义`, - Search: "搜索角色面具", - Create: "新建", - }, - Item: { - Info: (count: number) => `包含 ${count} 条预设对话`, - Chat: "对话", - View: "查看", - Edit: "编辑", - Delete: "删除", - DeleteConfirm: "确认删除?", - }, - EditModal: { - Title: (readonly: boolean) => - `编辑预设面具 ${readonly ? "(只读)" : ""}`, - Download: "下载预设", - Clone: "克隆预设", - }, - Config: { - Avatar: "角色头像", - Name: "角色名称", - }, - }, - NewChat: { - Return: "返回", - Skip: "跳过", - Title: "挑选一个面具", - SubTitle: "现在开始,与面具背后的灵魂思维碰撞", - More: "搜索更多", - NotShow: "不再展示", - ConfirmNoShow: "确认禁用?禁用后可以随时在设置中重新启用。", - }, - - UI: { - Confirm: "确认", - Cancel: "取消", - Close: "关闭", - Create: "新建", - Edit: "编辑", - }, -}; - -export default jp; diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download AirTycoon Online 3 Mod APK for Android - The Ultimate Airline Simulation Game.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download AirTycoon Online 3 Mod APK for Android - The Ultimate Airline Simulation Game.md deleted file mode 100644 index 58ec32ace6b1da10436488ae92e9f43d57e3bad0..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download AirTycoon Online 3 Mod APK for Android - The Ultimate Airline Simulation Game.md +++ /dev/null @@ -1,110 +0,0 @@ - -

        AirTycoon Online 3 Mod APK: A Guide for Beginners

        -

        If you are a fan of airline management simulation games, you might have heard of AirTycoon Online 3, a popular online multiplayer game that lets you compete with other players around the world and become the top airliner. But did you know that you can also enjoy this game with a mod APK that gives you unlimited money, coins, and other resources? In this article, we will tell you everything you need to know about AirTycoon Online 3 Mod APK, including what it is, how to download and install it, and some tips and tricks for playing it.

        -

        airtycoon online 3 mod apk


        Download Zip ○○○ https://gohhs.com/2uPqqn



        -

        What is AirTycoon Online 3?

        -

        AirTycoon Online 3 is a turn-based online airline management simulation game developed by TRADEGAME Lab Inc. It is the third installment of the AirTycoon Online series, which was first launched in 2013. The game has been fully upgraded with great quality 3D graphics and improved UI. It also features realistic game background based on historical real time, about 170 real airplane models released as time goes by, and 500 airports all over the world. You can manage your own airline, take over other players' companies, form alliances and code-shares, and communicate with other players. The game is available for both iOS and Android devices.

        -

        Game features

        -

        Some of the game features that make AirTycoon Online 3 stand out are:

        -
          -
        • Turn-based online airline management simulation game: You can play the game at your own pace, without worrying about real-time pressure or interruptions. You can also plan your strategies ahead and execute them when it is your turn.
        • Gorgeous 3D graphics: The game has stunning 3D graphics that make the game more realistic and immersive. You can see the details of your airplanes, airports, and routes in high quality.
        • Realistic game background based on historical real time: The game follows the historical timeline of the aviation industry, from 1960 to present. You can experience the changes and challenges that occurred in different eras, such as new airplane models, airport expansions, fuel prices, accidents, etc.
        • About 170 real airplane models released as time goes by: The game offers a wide range of airplane models that you can purchase or lease for your airline. You can choose from different types, sizes, capacities, ranges, speeds, and costs. Some of the airplane models include B747SP, B707-320B, A319neo, A320neo, A321neo, B787-10, B737-10, E175-E2, E190-E2, E195-E2, Caravelle1, Caravelle10B, Caravelle12, DC-8-61F, DC-8-62F , DC-8-72F.
        • Manage an airline in 500 airports all over the world: The game covers almost every major airport in the world, from Asia to Europe to America to Africa. You can open new routes between different cities with different business and tour levels. You can also set up your own hubs, branches, fuel tanks, maintenance depots, and VIP lounges to maximize your profits.
        • Take over other players' companies: The game allows you to take over other players' companies if they are bankrupt or inactive. You can acquire their assets and routes and expand your market share.
        • Detailed management report and report history: The game provides you with detailed information about your airline's performance and status. You can check your income statement, balance sheet, cash flow statement, route report, fleet report, slot report, etc. You can also view your report history and compare your results with other players in the same period.
        • Form alliances and code-shares: The game enables you to join or create alliances with other players and share your resources and benefits. You can also make code-share agreements with other airlines and increase your passenger demand.
        • Communicate with other players: The game has a chat system that allows you to chat with other players in real time. You can exchange tips, opinions, or even trash talk with your rivals.
        -

        Game modes

        -

        The game has two main game modes: online and offline. In online mode, you can play with other players in real time and compete for the top rank. You can choose from different servers based on your region and language. You can also create your own private room and invite your friends to play with you. In offline mode, you can play solo or with AI players and practice your skills. You can also customize your game settings, such as difficulty level, starting year, number of players, etc.

        -

        airtycoon online 3 unlimited money mod apk
        -airtycoon online 3 hack mod apk download
        -airtycoon online 3 latest version mod apk
        -airtycoon online 3 simulation game mod apk
        -airtycoon online 3 free download mod apk
        -airtycoon online 3 cheats mod apk android
        -airtycoon online 3 premium mod apk unlocked
        -airtycoon online 3 airline management mod apk
        -airtycoon online 3 full mod apk offline
        -airtycoon online 3 cracked mod apk no root
        -airtycoon online 3 best routes mod apk
        -airtycoon online 3 tips and tricks mod apk
        -airtycoon online 3 strategy guide mod apk
        -airtycoon online 3 review mod apk ios
        -airtycoon online 3 gameplay mod apk video
        -airtycoon online 3 wiki mod apk information
        -airtycoon online 3 forum mod apk support
        -airtycoon online 3 update mod apk new features
        -airtycoon online 3 codes mod apk redeem
        -airtycoon online 3 ranking mod apk leaderboard
        -airtycoon online 3 planes mod apk aircrafts
        -airtycoon online 3 airports mod apk destinations
        -airtycoon online 3 routes mod apk flights
        -airtycoon online 3 passengers mod apk demand
        -airtycoon online 3 cargo mod apk freight
        -airtycoon online 3 fuel mod apk cost
        -airtycoon online 3 maintenance mod apk quality
        -airtycoon online 3 staff mod apk salary
        -airtycoon online 3 marketing mod apk promotion
        -airtycoon online 3 alliance mod apk partners
        -airtycoon online 3 competitors mod apk rivals
        -airtycoon online 3 events mod apk challenges
        -airtycoon online 3 disasters mod apk accidents
        -airtycoon online 3 loans mod apk finance
        -airtycoon online 3 stocks mod apk investment
        -airtycoon online 3 research mod apk technology
        -airtycoon online 3 achievements mod apk rewards
        -airtycoon online 3 statistics mod apk data
        -airtycoon online 3 settings mod apk options
        -airtycoon online 3 graphics mod apk quality

        -

        What is a mod APK?

        -

        A mod APK is a modified version of an original APK file that has been altered by a third-party developer to add or remove some features. A mod APK can provide you with some advantages that are not available in the original version, such as unlimited resources, unlocked items, premium features, etc. However, a mod APK can also pose some risks, such as malware infection, account ban, legal issues, etc.

        -

        Benefits of using a mod APK

        -

        Some of the benefits of using a mod APK are:

        -
          -
        • Unlimited money and coins: With a mod APK, you can get unlimited money and coins that you can use to buy or upgrade anything in the game. You don't have to worry about running out of funds or spending real money to get more.
        • Unlocked airplanes and airports: With a mod APK, you can unlock all the airplanes and airports that are otherwise restricted or require a certain level or achievement to access. You can choose from any airplane model or airport location that you want.
        • Premium features: With a mod APK, you can enjoy some premium features that are only available for paid users or VIP members. For example, you can get free fuel tanks, maintenance depots, VIP lounges, etc.
        • No ads: With a mod APK, you can get rid of annoying ads that interrupt your gameplay or consume your data. You can play the game without any distractions or interruptions.
        -

        Risks of using a mod APK

        -

        Some of the risks of using a mod APK are:

        -
          -
        • Malware infection: With a mod APK, you might download a file that contains malicious code or software that can harm your device or steal your personal information. You might expose your device to viruses, spyware, ransomware, etc.
        • Account ban: With a mod APK, you might violate the terms and conditions of the game developer or publisher and get banned from playing the game. You might lose your progress, achievements, rankings, etc.
        • Legal issues: With a mod APK, you might infringe the intellectual property rights of the game developer or publisher and face legal consequences. You might be sued for damages, fines, or even imprisonment.
        -
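To put the malware and tampering risk in concrete terms: every APK is signed, and a modded build has to be re-signed with a certificate that differs from the official release. The sketch below shows one way a developer (or a curious user with a coding tool) could read the signing-certificate fingerprint of an installed package on Android 9 (API 28) and later; it is an illustration only, the helper name is invented, and it is not a feature of the game or of any mod.

```kotlin
import android.content.pm.PackageManager
import android.util.Base64
import java.security.MessageDigest

// Returns the SHA-256 fingerprint of the certificate a package was signed with.
// A re-signed (modded) APK will not match the fingerprint of the official release.
// Requires API 28+ for GET_SIGNING_CERTIFICATES.
fun signingFingerprint(pm: PackageManager, packageName: String): String {
    val info = pm.getPackageInfo(packageName, PackageManager.GET_SIGNING_CERTIFICATES)
    val cert = info.signingInfo.apkContentsSigners.first().toByteArray()
    val digest = MessageDigest.getInstance("SHA-256").digest(cert)
    return Base64.encodeToString(digest, Base64.NO_WRAP)
}
```

Comparing the value this returns against the fingerprint published for the official app is a quick way to tell whether the copy on your phone has been re-signed.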

        How to download and install AirTycoon Online 3 Mod APK?

        -

        If you want to download and install AirTycoon Online 3 Mod APK on your device, you need to follow these steps:

        -

        Steps to download and install

        -
          -
1. Find a reliable source: You need to find a trustworthy website that provides the link to download AirTycoon Online 3 Mod APK. You can search on Google or use some recommendations from other users. Make sure to check the reviews and ratings of the website before downloading anything.
2. Download the file: You need to click on the download button and wait for the file to be downloaded on your device. The file size is about 100 MB, so make sure you have enough space and data.
3. Enable unknown sources: You need to enable unknown sources on your device settings to allow the installation of apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
4. Install the file: You need to locate the downloaded file on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to be completed.
5. Launch the game: You need to open the game icon on your device and enjoy playing AirTycoon Online 3 Mod APK with unlimited money, coins, and other resources.
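For readers who wonder what actually happens at step 4 above: tapping the file simply asks Android's package installer to take over. The snippet below is a rough sketch of how an app such as a file manager hands a downloaded APK to the system installer; the FileProvider authority string is a placeholder, on Android 8 and later the calling app also needs the "install unknown apps" permission, and none of this is part of the game or the mod itself.

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Asks the system package installer to install a downloaded APK.
// "com.example.provider" is a placeholder FileProvider authority declared in the app manifest.
fun launchInstaller(context: Context, apkFile: File) {
    val uri = FileProvider.getUriForFile(context, "com.example.provider", apkFile)
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent)
}
```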
        -

        Tips and tricks for playing AirTycoon Online 3 Mod APK

        -

        Here are some tips and tricks that can help you play AirTycoon Online 3 Mod APK better and have more fun:

        -
          -
        • Choose your starting year wisely: The game allows you to choose your starting year from 1960 to present. The starting year affects the initial conditions of the game, such as the available airplanes, airports, fuel prices, etc. You can choose a starting year that suits your preference and strategy. For example, if you want to start with a small budget and a few airplanes, you can choose an earlier year. If you want to start with a large budget and many airplanes, you can choose a later year.
        • Research and develop new airplanes: The game gives you the option to research and develop new airplanes that can improve your performance and competitiveness. You can invest in R&D projects that can unlock new airplane models or upgrade existing ones. You can also customize your airplanes with different engines, seats, liveries, etc.
        • Optimize your routes and fares: The game requires you to manage your routes and fares efficiently to maximize your profits and customer satisfaction. You can open new routes between different cities with different demand levels and adjust your fares according to the market conditions. You can also use various strategies, such as hub-and-spoke, point-to-point, long-haul, short-haul, etc.
        • Expand your network and market share: The game challenges you to expand your network and market share by opening new branches, hubs, fuel tanks, maintenance depots, VIP lounges, etc. You can also take over other players' companies or form alliances and code-shares with them.
        • Monitor your performance and status: The game provides you with detailed reports and statistics that show your performance and status in various aspects. You can check your income statement, balance sheet, cash flow statement, route report, fleet report, slot report, etc. You can also view your report history and compare your results with other players in the same period.
        -

        Conclusion

        -

        AirTycoon Online 3 is a fun and addictive online airline management simulation game that lets you compete with other players around the world and become the top airliner. With AirTycoon Online 3 Mod APK, you can enjoy the game with unlimited money, coins, and other resources that can give you an edge over your rivals. However, you should also be aware of the risks of using a mod APK, such as malware infection, account ban, legal issues, etc. Therefore, you should download and install AirTycoon Online 3 Mod APK from a reliable source and at your own risk.

        -

        FAQs

        -

        Here are some frequently asked questions about AirTycoon Online 3 Mod APK:

        - - - - - - - -
| Question | Answer |
| --- | --- |
| Is AirTycoon Online 3 Mod APK free? | Yes, AirTycoon Online 3 Mod APK is free to download and install on your device. |
| Is AirTycoon Online 3 Mod APK safe? | AirTycoon Online 3 Mod APK is not officially endorsed or supported by the game developer or publisher. Therefore, it may contain some risks, such as malware infection, account ban, legal issues, etc. You should download and install AirTycoon Online 3 Mod APK from a reliable source and at your own risk. |
| Can I play AirTycoon Online 3 Mod APK offline? | No, AirTycoon Online 3 Mod APK requires an internet connection to play online with other players. However, you can play offline mode with AI players or solo mode without internet connection. |
| Can I update AirTycoon Online 3 Mod APK? | No, AirTycoon Online 3 Mod APK may not be compatible with the latest version of the original game. Therefore, you may not be able to update AirTycoon Online 3 Mod APK or enjoy the new features of the original game. |
| Can I use AirTycoon Online 3 Mod APK on iOS devices? | No, AirTycoon Online 3 Mod APK is only available for Android devices. You cannot use it on iOS devices. |

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Mahjong 3D and Spin the Puzzle Cube to Find All the Matches.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Mahjong 3D and Spin the Puzzle Cube to Find All the Matches.md deleted file mode 100644 index accc2d620f35810f2a6e3839d56ac22136ec0d25..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Mahjong 3D and Spin the Puzzle Cube to Find All the Matches.md +++ /dev/null @@ -1,119 +0,0 @@ - -

        Download Mahjong 3D: A Guide for Beginners

        -

        If you are looking for a fun and challenging game that can stimulate your brain and relax your mind, you might want to try mahjong 3D. Mahjong 3D is a modern version of the classic Chinese game of mahjong, where you have to match pairs of tiles in a three-dimensional cube. In this article, we will explain what mahjong 3D is, how to play it, and how to download it on your device. Whether you are new to mahjong or a seasoned player, you will find something useful and interesting in this guide.

        -

        What is Mahjong 3D?

        -

        Mahjong 3D is a game that combines the elements of traditional mahjong and modern technology. It is based on the same principles as the original game, but with some twists and enhancements. Let's take a look at the history and origin of mahjong, the difference between mahjong and mahjong 3D, and the benefits of playing mahjong 3D.

        -

        download mahjong 3d


        Download Ziphttps://gohhs.com/2uPmP8



        -

        The history and origin of mahjong

        -

        Mahjong is a game that originated in China, possibly as early as the Qing dynasty (1644-1912). It is believed that it was derived from a card game called ma diao, which was played with paper cards that resembled money. Mahjong became popular among the Chinese nobility and upper class, and later spread to other parts of Asia and the world. Mahjong is usually played by four players, who use a set of 144 tiles with various symbols and characters. The goal is to form sets of tiles called melds, such as pairs, triplets, or sequences, and achieve a winning hand.

        -

        The difference between mahjong and mahjong 3D

        -

        Mahjong 3D is a variation of mahjong that uses a three-dimensional layout instead of a flat board. The tiles are arranged in a cube or a pyramid, and can be rotated and viewed from different angles. The rules are similar to the original game, but with some modifications. For example, in mahjong 3D, you can only match tiles that have at least one free side (left or right), whereas in traditional mahjong, you can also match tiles that are free on the top or bottom. Also, in mahjong 3D, you can match any two identical tiles, regardless of their suit or category, whereas in traditional mahjong, you have to follow certain restrictions based on the type of tile.
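To make that rule concrete, here is a small illustrative sketch of how the 3D variant's matching check could be written in code. It follows the description above (a tile is playable when its left or right side is free, and any two identical faces can be paired); the tile model and function names are invented for the example and simplified to a plain grid, so they do not come from any particular mahjong 3D game.

```kotlin
// Minimal tile model for the example: a face value and a grid position in the cube.
// Real layouts are more complex (offsets, partial overlaps); this is deliberately simplified.
data class Tile(val face: String, val x: Int, val y: Int, val z: Int)

// Following the rule described above: a tile is playable in the 3D variant
// when at least one of its left/right neighbours is missing.
fun isPlayable(tile: Tile, layout: Collection<Tile>): Boolean {
    val leftBlocked = layout.any { it.y == tile.y && it.z == tile.z && it.x == tile.x - 1 }
    val rightBlocked = layout.any { it.y == tile.y && it.z == tile.z && it.x == tile.x + 1 }
    return !leftBlocked || !rightBlocked
}

// Any two identical playable tiles may be removed, regardless of suit or category.
fun canMatch(a: Tile, b: Tile, layout: Collection<Tile>): Boolean =
    a != b && a.face == b.face && isPlayable(a, layout) && isPlayable(b, layout)
```

A pair that passes this check is removed from the layout, and the puzzle is cleared when no tiles are left.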

        -

        download mahjong dimensions: 3d puzzles
        -download tap tiles - mahjong 3d puzzle
        -download mahjong triple 3d - tile match
        -download 3d mahjong mountain free
        -download mahjong solitaire 3d - classic free game
        -download mahjongg dimensions blast: 3d puzzle game
        -download mahjong 3d - pair matching puzzle
        -download mahjong world 3d - all in one majong games
        -download mahjong journey: a tile match adventure quest
        -download mahjong treasure quest: classic gem matching 3d
        -download mahjong solitaire epic: free relaxing games
        -download mahjong titan: free board game
        -download mahjong master 3d - free board game offline
        -download mahjong legend: classic free puzzle game
        -download mahjong city tours: an epic journey and quest
        -download mahjong solitaire: classic free puzzle game
        -download mahjong king: free tile matching game
        -download mahjong magic worlds: journey of the wood elves
        -download mahjong solitaire guru: free board game online
        -download mahjong forest journey: free relaxing puzzle game
        -download mahjong gold: free board game offline & online
        -download mahjong deluxe go: classic tiles game
        -download mahjong solitaire dragon: free board game offline
        -download mahjong oriental: asian classic free games
        -download mahjong village: free board game offline & online
        -download mahjong quest - best free puzzle & board game
        -download mahjong solitaire saga: free relaxing fun game
        -download mahjong solitaire grand harvest - tile matching
        -download mahjong solitaire animal: free board game offline
        -download mahjong flowers: blossom tile matching puzzle game
        -download mah jongg solitaire - relaxing zen puzzle game
        -download real 3d chinese mah-jongg - four winds majongg hd
        -download ultimate shanghai majongg - classic chinese tiles game 3d hd pro edition
        -download ancient egyptian majongg - pyramid tiles match 3d hd pro edition
        -download japanese majongg - tokyo tiles match 3d hd pro edition
        -download halloween majongg - spooky tiles match 3d hd pro edition
        -download christmas majongg - festive tiles match 3d hd pro edition
        -download valentine's day majongg - love tiles match 3d hd pro edition
        -download easter majongg - bunny tiles match 3d hd pro edition
        -download summer majongg - beach tiles match 3d hd pro edition
        -download autumn majongg - fall tiles match 3d hd pro edition
        -download winter majongg - snow tiles match 3d hd pro edition
        -download spring majongg - flower tiles match 3d hd pro edition
        -download fairy tale majongg - fantasy tiles match 3d hd pro edition
        -download animal majongg - safari tiles match 3d hd pro edition
        -download candy majongg - sweet tiles match 3d hd pro edition
        -download fruit majongg - juicy tiles match 3d hd pro edition
        -download space majongg - galaxy tiles match 3d hd pro edition
        -download pirate majongg - treasure tiles match 3d hd pro edition

        -

        The benefits of playing mahjong 3D

        -

        Playing mahjong 3D can have many benefits for your mental and physical health. Some of these benefits are:

        -
          -
        • It improves your memory and concentration skills, as you have to remember the location and shape of the tiles.
        • It enhances your spatial awareness and visual perception, as you have to manipulate the cube and find matches from different perspectives.
        • It reduces your stress and anxiety levels, as you have to focus on the game and forget about your worries.
        • It increases your creativity and problem-solving abilities, as you have to come up with strategies and solutions for each puzzle.
        • It boosts your mood and self-esteem, as you have to challenge yourself and achieve goals.
        -

        How to play Mahjong 3D

        -

Now that you know what mahjong 3D is, you are probably ready to start playing. You can play it online in your browser, download it on your smartphone or tablet, or you can enjoy it offline with a physical set of tiles. Here are some of the best websites, apps, games, and ways to enjoy mahjong 3D offline.

        -

        The best websites and platforms to play mahjong 3D online

        -

        If you want to play mahjong 3D online, you have plenty of options to choose from. You can find various websites and platforms that offer free mahjong 3D games, with different levels of difficulty, themes, and layouts. Some of the best websites and platforms to play mahjong 3D online are:

        -
          -
        • Mahjong 3D on CrazyGames: This website offers a simple and fun mahjong 3D game, where you have to match tiles in a cube within a time limit. You can use hints, shuffles, and undos to help you if you get stuck. You can also choose from different themes and backgrounds to customize your game.
        • Mahjong Dimensions on Arkadium Games: This website offers a classic mahjong 3D game, where you have to match tiles in a pyramid within a time limit. You can use multi-match combos, speed match bonuses, and hints to boost your score. You can also play offline, without ads, and with more features if you sign up for Arkadium Advantage.
        • Mahjong 3D Classic on CrazyGames: This website offers another mahjong 3D game, but with a more traditional style of tiles. You have to match tiles in a cube within a time limit, and you can use hints and shuffles if you need them. You can also choose from different themes and layouts to suit your preference.
        -

        The best apps and games to download mahjong 3D on your device

        -

        If you want to download mahjong 3D on your device, you have many options as well. You can find various apps and games that offer mahjong 3D puzzles, with different features and modes. Some of the best apps and games to download mahjong 3D on your device are:

        -
          -
        • Mahjong Dimensions: 3D Puzzles on Google Play: This app offers a free mahjong 3D game, where you have to match tiles in a cube within a time limit. You can use multi-match combos, speed match bonuses, hints, shuffles, and undos to help you. You can also play offline, without ads, and with more features if you sign up for Arkadium Advantage.
        • Mahjong Triple 3D -Tile Match on Google Play: This app offers a challenging mahjong 3D game, where you have to match three tiles instead of two in a cube. You can use hints and shuffles if you get stuck. You can also choose from different themes and backgrounds to customize your game.
        • Mahjong 3D Matching Puzzle on Google Play: This app offers a relaxing mahjong 3D game, where you have to match tiles in a cube without a time limit. You can use hints if you need them. You can also choose from different levels of difficulty, themes, and layouts to suit your skill and taste.
        • Mahjong Dimensions - 3D Cube on the App Store: This app offers the same mahjong 3D game as the one on Google Play, but for iOS devices. You can enjoy the same features and modes as the Android version.
        -

        The best ways to enjoy mahjong 3D offline

        -

        If you want to enjoy mahjong 3D offline, you have some options as well. You can buy or make your own physical set of tiles that are designed for mahjong 3D puzzles. You can also print out some templates or patterns of mahjong 3D layouts that you can use with your tiles. Some of the best ways to enjoy mahjong 3D offline are:

        -
          -
        • Mah Jongg for Windows - The Real Game on Amazon: This product offers a physical set of tiles that are specially made for mahjong 3D puzzles. The tiles are colorful and durable, and come with a storage case and instructions. You can use the tiles to create your own layouts or follow some of the examples provided.
• How To Make Your Own Mah Jongg Tiles on Instructables: This tutorial shows you how to make your own mah jongg tiles using cardboard, paper, glue, and scissors. You can customize the tiles with your own designs and colors. You can also use the templates provided to create different layouts for mahjong 3D puzzles.

        • Mahjong 3D Layouts on Pinterest: This website offers a collection of images and links to various mahjong 3D layouts that you can use with your tiles. You can find different shapes, sizes, and levels of difficulty for your mahjong 3D puzzles. You can also print out some of the layouts or save them on your device.
        -

        Conclusion

        -

        Mahjong 3D is a game that can provide you with hours of fun and entertainment. It is a game that can improve your memory, concentration, spatial awareness, visual perception, creativity, problem-solving, mood, and self-esteem. It is a game that can be played online or offline, on your device or with a physical set of tiles. It is a game that can be customized and personalized to suit your preference and skill level. It is a game that can be enjoyed by anyone, regardless of age, gender, or background.

        -

        Summary of the main points

        -

        In this article, we have covered the following topics:

        -
          -
        • What is mahjong 3D and how it differs from traditional mahjong.
        • How to play mahjong 3D and what are the rules and objectives.
        • How to download mahjong 3D on your device and what are the best websites, apps, and games.
        • How to enjoy mahjong 3D offline and what are the best ways to do so.
        -

        Call to action and invitation to comment

        -

        If you are interested in playing mahjong 3D, we encourage you to try it out for yourself. You can use any of the websites, apps, games, or methods that we have suggested in this article, or you can find your own sources and resources. You can also share this article with your friends and family who might enjoy mahjong 3D as well. We would love to hear from you about your experience with mahjong 3D. Please leave us a comment below and let us know what you think about the game, how you play it, and what tips and tricks you have learned. Thank you for reading and happy gaming!

        -

        Frequently Asked Questions

        -

        Here are some of the most frequently asked questions about mahjong 3D:

        -
          -
        1. Is mahjong 3D hard to play?

          No, mahjong 3D is not hard to play. It is a game that can be learned quickly and easily by anyone. The rules are simple and straightforward, and the game is intuitive and user-friendly. You can also adjust the level of difficulty, theme, and layout to suit your skill and taste.

          -
2. Is mahjong 3D good for your brain?

          Yes, mahjong 3D is good for your brain. It is a game that can stimulate your brain and improve your cognitive functions. It can enhance your memory, concentration, spatial awareness, visual perception, creativity, problem-solving, mood, and self-esteem. It can also reduce your stress and anxiety levels by providing you with a relaxing and enjoyable activity.

          -
3. Is mahjong 3D free to play?

          Yes, mahjong 3D is free to play. You can find many websites and platforms that offer free mahjong 3D games online. You can also download many apps and games that offer free mahjong 3D puzzles on your device. You can also enjoy mahjong 3D offline with a physical set of tiles that you can buy or make yourself.

          -
4. Is mahjong 3D addictive?

          Yes, mahjong 3D can be addictive. It is a game that can provide you with hours of fun and entertainment. It is a game that can challenge you and reward you with a sense of achievement and satisfaction. It is a game that can keep you engaged and motivated by offering you different levels of difficulty, themes, and layouts. However, like any other game or activity, it is important to play mahjong 3D in moderation and balance it with other aspects of your life.

          -
5. Is mahjong 3D related to gambling?

No, mahjong 3D is not related to gambling. Mahjong 3D is a game of skill and strategy, not luck or chance. Mahjong 3D does not involve any money or bets, unlike some forms of traditional mahjong that are played for gambling purposes. Mahjong 3D is a game that can be enjoyed by anyone, regardless of their age, gender, or background.

          197e85843d
          -
          -
          \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download The Murder Inc Story and Witness the History of Hip-Hops Most Controversial Label.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download The Murder Inc Story and Witness the History of Hip-Hops Most Controversial Label.md deleted file mode 100644 index 7d7ccd7f1e0b2bc44d98a6a46d0d8befaf936773..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download The Murder Inc Story and Witness the History of Hip-Hops Most Controversial Label.md +++ /dev/null @@ -1,123 +0,0 @@ - -

          Download the Murder Inc Story

          -

          If you are a fan of true crime stories, you might be interested in downloading the Murder Inc Story. But what is the Murder Inc Story, and why should you download it? In this article, we will explain what the Murder Inc Story is, why you should download it, and how to download it from various online sources.

          -

          download the murder inc story


          Download Filehttps://gohhs.com/2uPmXd



          -

## What is the Murder Inc Story?

The Murder Inc Story can refer to different things related to Murder, Inc., a group of contract killers that operated for the National Crime Syndicate in the 1930s and 1940s. The National Crime Syndicate was a closely connected criminal organization that included the Italian-American Mafia, the Jewish Mob, and other criminal organizations in New York City and elsewhere.

### The Murder Inc Story: A Book by Burton Turkus and Sid Feder

One of the things that the Murder Inc Story can refer to is a book by Burton Turkus and Sid Feder that tells how the group was exposed and prosecuted. The book, titled Murder, Inc.: The Story of "The Syndicate", was published in 1951 and was based on Turkus's experience as an assistant district attorney who led the investigation and prosecution of Murder, Inc. members. The book reveals how Murder, Inc. was composed of Jewish and Italian-American gangsters who accepted murder contracts from mob bosses all around the United States, and how they were responsible for between 400 and 1,000 contract killings. The book also describes how some of the members turned state's evidence and testified against their former associates, leading to many convictions and executions. The book was adapted into a film in 1960, starring Stuart Whitman as Turkus and Peter Falk as Abe Reles, one of the key informants.

### The Murder Inc Story: A TV Series by BET

Another thing that the Murder Inc Story can refer to is a five-part television documentary series on BET that narrates the rise, fall, and redemption of Murder Inc. Records, a hip-hop label founded by Irv Gotti in 1999. The series features archival footage, interviews, and music from the label's artists, such as Ja Rule, Ashanti, and Lloyd. The series premiered on August 9, 2022.

The series tells how Irv Gotti, a music producer and DJ, started his own label with the help of Def Jam Records and Universal Music Group, and how he signed some of the hottest artists in the industry at the time. The series also shows how Murder Inc. Records faced challenges such as bad blood with rival rapper 50 Cent, accusations of laundering money for drug kingpin Supreme, and a federal investigation that threatened to destroy the label and its reputation. The series also depicts how Irv Gotti managed to overcome these obstacles and rebuild his career and his life.


## Why Should You Download the Murder Inc Story?

Now that you know what the Murder Inc Story is, you might be wondering why you should download it. Here are some reasons why you should download the Murder Inc Story:

### Learn About the History of Organized Crime in America

If you are interested in learning about the history of organized crime in America, downloading the book Murder, Inc.: The Story of "The Syndicate" is a good way to do so. The book gives you an insider's perspective on how Murder, Inc., one of the most notorious crime groups in American history, operated and how they were brought to justice. The book also gives you a glimpse into the lives and personalities of some of the most infamous gangsters and killers of all time, such as Albert Anastasia, Louis Buchalter, and Harry Strauss. The book is a classic of true crime literature that has influenced many other works in the genre.

### Discover the Untold Stories of a Hip-Hop Empire

If you are a fan of hip-hop music and culture, downloading the TV series The Murder Inc Story is a good way to discover the untold stories of one of the most successful and controversial labels in the industry. The series gives you exclusive access to the behind-the-scenes drama, the creative process, and the personal struggles of the label's founder and artists. The series also showcases some of the hit songs and albums that made Murder Inc. Records a household name in the early 2000s, such as *Venni Vetti Vecci*, *Pain Is Love*, and *The Last Temptation* by Ja Rule, *Ashanti* and *Chapter II* by Ashanti, and *Street Love* by Lloyd. The series is a must-watch for any hip-hop lover who wants to learn more about the history and legacy of Murder Inc. Records.

### Enjoy the Thrill and Drama of True Crime Stories

If you are looking for some entertainment and excitement, downloading the Murder Inc Story is a good way to enjoy the thrill and drama of true crime stories. Whether you choose to read the book or watch the TV series, you will be captivated by the suspenseful and gripping narratives that will keep you on the edge of your seat. You will also be fascinated by the complex and intriguing characters that will make you feel a range of emotions, from admiration to disgust, from sympathy to fear. The Murder Inc Story is a perfect choice for anyone who loves a good story that is based on real events and people.

## How to Download the Murder Inc Story?

Now that you know why you should download the Murder Inc Story, you might be wondering how to do it. Here are some ways to download the Murder Inc Story from various online sources:

### Download the Book from Online Sources

If you want to download the book Murder, Inc.: The Story of "The Syndicate", here are some options:

#### Amazon Kindle

One of the easiest and most convenient ways to download the book is through Amazon Kindle, an online service that allows you to buy and read e-books on various devices. You can buy the book for $9.99 on the Amazon Kindle Store and download it to your Kindle device or app. You can also read a sample of the book for free before buying it.

#### Internet Archive

Another way to download the book is through Internet Archive, a non-profit digital library that offers free access to millions of books, movies, music, and other media. You can find a scanned copy of the book on Internet Archive and download it in various formats, such as PDF, EPUB, or MOBI. You can also read it online or borrow it for 14 days.

### Stream or Buy the TV Series Online

If you want to stream or buy the TV series The Murder Inc Story, here are some options:

#### BET+

The best way to stream or buy the TV series is through BET+, an online streaming service that offers exclusive access to original shows, movies, and specials from BET Networks. You can subscribe to BET+ for $9.99 per month or $99.99 per year and watch all five episodes of The Murder Inc Story. You can also watch other shows and movies from BET Networks, such as American Gangster: Trap Queens, Tales, and BET Awards.

#### JustWatch

Another way to stream or buy the TV series is through JustWatch, an online platform that allows you to search for and compare streaming services for movies and shows. You can use JustWatch to find out where you can watch The Murder Inc Story online in your country. You can also filter by price, quality, genre, and rating.

## Conclusion

The Murder Inc Story is a fascinating and captivating story that covers different aspects of Murder, Inc.: a group of contract killers, a hip-hop label, and a true crime story. Whether you choose to download the book or the TV series, you will not regret it. You will learn a lot about the history of organized crime in America, discover the untold stories of a hip-hop empire, and enjoy the thrill and drama of true crime stories. So what are you waiting for? Download the Murder Inc Story today and get ready for an amazing experience.

## FAQs

Here are some frequently asked questions about the Murder Inc Story:

**Is the Murder Inc Story based on real events and people?**

Yes, both the book and the TV series are based on real events and people. The book is based on the investigation and prosecution of Murder, Inc., a group of contract killers that operated for the National Crime Syndicate in the 1930s and 1940s. The TV series is based on the rise, fall, and redemption of Murder Inc. Records, a hip-hop label founded by Irv Gotti in 1999.

**Where can I find more information about the Murder Inc Story?**

If you want to find more information about the Murder Inc Story, you can visit the following websites:

**What are some other books and shows that are similar to the Murder Inc Story?**

If you like the Murder Inc Story, you might also like these books and shows that are similar to it:

- The Five Families: The Rise, Decline, and Resurgence of America's Most Powerful Mafia Empires by Selwyn Raab - A book that chronicles the history of the five major New York City Mafia families from their origins to their present status.
- The Defiant Ones by Allen Hughes - A four-part documentary series that explores the partnership and friendship between music legends Dr. Dre and Jimmy Iovine, who co-founded Beats Electronics and Interscope Records.
- The Godfather by Mario Puzo - A novel that tells the story of the Corleone family, a powerful Italian-American crime family that operates in New York City in the 1940s and 1950s.
- The Get Down by Baz Luhrmann - A musical drama series that depicts the birth of hip-hop culture in the Bronx in the late 1970s.

**How can I contact the authors or producers of the Murder Inc Story?**

If you want to contact the authors or producers of the Murder Inc Story, you can use these methods:

- Burton Turkus - The author of Murder, Inc.: The Story of "The Syndicate". He passed away in 1982, but you can contact his estate through his publisher, Da Capo Press.
- Sid Feder - The co-author of Murder, Inc.: The Story of "The Syndicate". He passed away in 1977, but you can contact his estate through his publisher, Da Capo Press.
- Irv Gotti - The founder of Murder Inc. Records and the executive producer of The Murder Inc Story. You can contact him through his Instagram account (@irvgotti187) or his email address (irvgotti@murderincrecords.com).
- BET Networks - The network that produced and aired The Murder Inc Story. You can contact them through their website (www.bet.com), their phone number (1-212-205-3000), or their mailing address (BET Networks, 1540 Broadway, New York, NY 10036).

**How can I give feedback or reviews on the Murder Inc Story?**

If you want to give feedback or reviews on the Murder Inc Story, you can use these platforms:

- Amazon - You can buy and review the book Murder, Inc.: The Story of "The Syndicate" on the Amazon Kindle Store. You can also rate and review other books by the same authors or related to the same topic.
- BET+ - You can subscribe and review the TV series The Murder Inc Story on BET+. You can also rate and review other shows and movies by the same producers or related to the same genre.
- Goodreads - You can join and review the book Murder, Inc.: The Story of "The Syndicate" on Goodreads, a social media platform for book lovers. You can also join groups, discussions, and challenges related to the book or the topic.
- IMDb - You can register and review the TV series The Murder Inc Story on IMDb, a website that provides information and ratings on movies and shows. You can also browse trivia, quotes, and news related to the series or the topic.

            197e85843d
            \ No newline at end of file diff --git a/spaces/fffiloni/bark-transformers-example/README.md b/spaces/fffiloni/bark-transformers-example/README.md deleted file mode 100644 index 44f48d6d13c04f48e4c5ac1f113506bfa483dffe..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/bark-transformers-example/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bark Transformers Example -emoji: 🐶🤗 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/readline/promises.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/readline/promises.d.ts deleted file mode 100644 index 8f9f06f0b964ee8d214d670dfde9006fd8a18a30..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/readline/promises.d.ts +++ /dev/null @@ -1,143 +0,0 @@ -/** - * The `readline/promise` module provides an API for reading lines of input from a Readable stream one line at a time. - * - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/readline/promises.js) - * @since v17.0.0 - */ -declare module 'readline/promises' { - import { Interface as _Interface, ReadLineOptions, Completer, AsyncCompleter, Direction } from 'node:readline'; - import { Abortable } from 'node:events'; - - class Interface extends _Interface { - /** - * The rl.question() method displays the query by writing it to the output, waits for user input to be provided on input, - * then invokes the callback function passing the provided input as the first argument. - * - * When called, rl.question() will resume the input stream if it has been paused. - * - * If the readlinePromises.Interface was created with output set to null or undefined the query is not written. - * - * If the question is called after rl.close(), it returns a rejected promise. - * - * Example usage: - * - * ```js - * const answer = await rl.question('What is your favorite food? '); - * console.log(`Oh, so your favorite food is ${answer}`); - * ``` - * - * Using an AbortSignal to cancel a question. - * - * ```js - * const signal = AbortSignal.timeout(10_000); - * - * signal.addEventListener('abort', () => { - * console.log('The food question timed out'); - * }, { once: true }); - * - * const answer = await rl.question('What is your favorite food? ', { signal }); - * console.log(`Oh, so your favorite food is ${answer}`); - * ``` - * - * @since v17.0.0 - * @param query A statement or query to write to output, prepended to the prompt. - */ - question(query: string): Promise; - question(query: string, options: Abortable): Promise; - } - - class Readline { - /** - * @param stream A TTY stream. - */ - constructor(stream: NodeJS.WritableStream, options?: { autoCommit?: boolean }); - /** - * The `rl.clearLine()` method adds to the internal list of pending action an action that clears current line of the associated `stream` in a specified direction identified by `dir`. - * Call `rl.commit()` to see the effect of this method, unless `autoCommit: true` was passed to the constructor. - */ - clearLine(dir: Direction): this; - /** - * The `rl.clearScreenDown()` method adds to the internal list of pending action an action that clears the associated `stream` from the current position of the cursor down. 
- * Call `rl.commit()` to see the effect of this method, unless `autoCommit: true` was passed to the constructor. - */ - clearScreenDown(): this; - /** - * The `rl.commit()` method sends all the pending actions to the associated `stream` and clears the internal list of pending actions. - */ - commit(): Promise; - /** - * The `rl.cursorTo()` method adds to the internal list of pending action an action that moves cursor to the specified position in the associated `stream`. - * Call `rl.commit()` to see the effect of this method, unless `autoCommit: true` was passed to the constructor. - */ - cursorTo(x: number, y?: number): this; - /** - * The `rl.moveCursor()` method adds to the internal list of pending action an action that moves the cursor relative to its current position in the associated `stream`. - * Call `rl.commit()` to see the effect of this method, unless autoCommit: true was passed to the constructor. - */ - moveCursor(dx: number, dy: number): this; - /** - * The `rl.rollback()` method clears the internal list of pending actions without sending it to the associated `stream`. - */ - rollback(): this; - } - - /** - * The `readlinePromises.createInterface()` method creates a new `readlinePromises.Interface` instance. - * - * ```js - * const readlinePromises = require('node:readline/promises'); - * const rl = readlinePromises.createInterface({ - * input: process.stdin, - * output: process.stdout - * }); - * ``` - * - * Once the `readlinePromises.Interface` instance is created, the most common case is to listen for the `'line'` event: - * - * ```js - * rl.on('line', (line) => { - * console.log(`Received: ${line}`); - * }); - * ``` - * - * If `terminal` is `true` for this instance then the `output` stream will get the best compatibility if it defines an `output.columns` property, - * and emits a `'resize'` event on the `output`, if or when the columns ever change (`process.stdout` does this automatically when it is a TTY). - * - * ## Use of the `completer` function - * - * The `completer` function takes the current line entered by the user as an argument, and returns an `Array` with 2 entries: - * - * - An Array with matching entries for the completion. - * - The substring that was used for the matching. - * - * For instance: `[[substr1, substr2, ...], originalsubstring]`. - * - * ```js - * function completer(line) { - * const completions = '.help .error .exit .quit .q'.split(' '); - * const hits = completions.filter((c) => c.startsWith(line)); - * // Show all completions if none found - * return [hits.length ? 
hits : completions, line]; - * } - * ``` - * - * The `completer` function can also returns a `Promise`, or be asynchronous: - * - * ```js - * async function completer(linePartial) { - * await someAsyncWork(); - * return [['123'], linePartial]; - * } - * ``` - */ - function createInterface( - input: NodeJS.ReadableStream, - output?: NodeJS.WritableStream, - completer?: Completer | AsyncCompleter, - terminal?: boolean, - ): Interface; - function createInterface(options: ReadLineOptions): Interface; -} -declare module 'node:readline/promises' { - export * from 'readline/promises'; -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/index.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/index.d.ts deleted file mode 100644 index 673cb869ec0087e17c722b7483ee96e747186cd3..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/index.d.ts +++ /dev/null @@ -1,538 +0,0 @@ -/// -/// -/// -import http = require("http"); -import type { Server as HTTPSServer } from "https"; -import type { Http2SecureServer } from "http2"; -import type { ServerOptions as EngineOptions, AttachOptions, BaseServer } from "engine.io"; -import { ExtendedError, Namespace, ServerReservedEventsMap } from "./namespace"; -import { Adapter, Room, SocketId } from "socket.io-adapter"; -import * as parser from "socket.io-parser"; -import type { Encoder } from "socket.io-parser"; -import { Socket, DisconnectReason } from "./socket"; -import type { BroadcastOperator, RemoteSocket } from "./broadcast-operator"; -import { EventsMap, DefaultEventsMap, EventParams, StrictEventEmitter, EventNames, DecorateAcknowledgementsWithTimeoutAndMultipleResponses, AllButLast, Last, FirstArg, SecondArg } from "./typed-events"; -declare type ParentNspNameMatchFn = (name: string, auth: { - [key: string]: any; -}, fn: (err: Error | null, success: boolean) => void) => void; -declare type AdapterConstructor = typeof Adapter | ((nsp: Namespace) => Adapter); -interface ServerOptions extends EngineOptions, AttachOptions { - /** - * name of the path to capture - * @default "/socket.io" - */ - path: string; - /** - * whether to serve the client files - * @default true - */ - serveClient: boolean; - /** - * the adapter to use - * @default the in-memory adapter (https://github.com/socketio/socket.io-adapter) - */ - adapter: AdapterConstructor; - /** - * the parser to use - * @default the default parser (https://github.com/socketio/socket.io-parser) - */ - parser: any; - /** - * how many ms before a client without namespace is closed - * @default 45000 - */ - connectTimeout: number; - /** - * Whether to enable the recovery of connection state when a client temporarily disconnects. - * - * The connection state includes the missed packets, the rooms the socket was in and the `data` attribute. - */ - connectionStateRecovery: { - /** - * The backup duration of the sessions and the packets. - * - * @default 120000 (2 minutes) - */ - maxDisconnectionDuration?: number; - /** - * Whether to skip middlewares upon successful connection state recovery. - * - * @default true - */ - skipMiddlewares?: boolean; - }; - /** - * Whether to remove child namespaces that have no sockets connected to them - * @default false - */ - cleanupEmptyChildNamespaces: boolean; -} -/** - * Represents a Socket.IO server. 
- * - * @example - * import { Server } from "socket.io"; - * - * const io = new Server(); - * - * io.on("connection", (socket) => { - * console.log(`socket ${socket.id} connected`); - * - * // send an event to the client - * socket.emit("foo", "bar"); - * - * socket.on("foobar", () => { - * // an event was received from the client - * }); - * - * // upon disconnection - * socket.on("disconnect", (reason) => { - * console.log(`socket ${socket.id} disconnected due to ${reason}`); - * }); - * }); - * - * io.listen(3000); - */ -export declare class Server extends StrictEventEmitter> { - readonly sockets: Namespace; - /** - * A reference to the underlying Engine.IO server. - * - * @example - * const clientsCount = io.engine.clientsCount; - * - */ - engine: BaseServer; - /** @private */ - readonly _parser: typeof parser; - /** @private */ - readonly encoder: Encoder; - /** - * @private - */ - _nsps: Map>; - private parentNsps; - /** - * A subset of the {@link parentNsps} map, only containing {@link ParentNamespace} which are based on a regular - * expression. - * - * @private - */ - private parentNamespacesFromRegExp; - private _adapter?; - private _serveClient; - private readonly opts; - private eio; - private _path; - private clientPathRegex; - /** - * @private - */ - _connectTimeout: number; - private httpServer; - /** - * Server constructor. - * - * @param srv http server, port, or options - * @param [opts] - */ - constructor(opts?: Partial); - constructor(srv?: http.Server | HTTPSServer | Http2SecureServer | number, opts?: Partial); - constructor(srv: undefined | Partial | http.Server | HTTPSServer | Http2SecureServer | number, opts?: Partial); - get _opts(): Partial; - /** - * Sets/gets whether client code is being served. - * - * @param v - whether to serve client code - * @return self when setting or value when getting - */ - serveClient(v: boolean): this; - serveClient(): boolean; - serveClient(v?: boolean): this | boolean; - /** - * Executes the middleware for an incoming namespace not already created on the server. - * - * @param name - name of incoming namespace - * @param auth - the auth parameters - * @param fn - callback - * - * @private - */ - _checkNamespace(name: string, auth: { - [key: string]: any; - }, fn: (nsp: Namespace | false) => void): void; - /** - * Sets the client serving path. - * - * @param {String} v pathname - * @return {Server|String} self when setting or value when getting - */ - path(v: string): this; - path(): string; - path(v?: string): this | string; - /** - * Set the delay after which a client without namespace is closed - * @param v - */ - connectTimeout(v: number): this; - connectTimeout(): number; - connectTimeout(v?: number): this | number; - /** - * Sets the adapter for rooms. - * - * @param v pathname - * @return self when setting or value when getting - */ - adapter(): AdapterConstructor | undefined; - adapter(v: AdapterConstructor): this; - /** - * Attaches socket.io to a server or port. - * - * @param srv - server or port - * @param opts - options passed to engine.io - * @return self - */ - listen(srv: http.Server | HTTPSServer | Http2SecureServer | number, opts?: Partial): this; - /** - * Attaches socket.io to a server or port. 
- * - * @param srv - server or port - * @param opts - options passed to engine.io - * @return self - */ - attach(srv: http.Server | HTTPSServer | Http2SecureServer | number, opts?: Partial): this; - attachApp(app: any, opts?: Partial): void; - /** - * Initialize engine - * - * @param srv - the server to attach to - * @param opts - options passed to engine.io - * @private - */ - private initEngine; - /** - * Attaches the static file serving. - * - * @param srv http server - * @private - */ - private attachServe; - /** - * Handles a request serving of client source and map - * - * @param req - * @param res - * @private - */ - private serve; - /** - * @param filename - * @param req - * @param res - * @private - */ - private static sendFile; - /** - * Binds socket.io to an engine.io instance. - * - * @param engine engine.io (or compatible) server - * @return self - */ - bind(engine: BaseServer): this; - /** - * Called with each incoming transport connection. - * - * @param {engine.Socket} conn - * @return self - * @private - */ - private onconnection; - /** - * Looks up a namespace. - * - * @example - * // with a simple string - * const myNamespace = io.of("/my-namespace"); - * - * // with a regex - * const dynamicNsp = io.of(/^\/dynamic-\d+$/).on("connection", (socket) => { - * const namespace = socket.nsp; // newNamespace.name === "/dynamic-101" - * - * // broadcast to all clients in the given sub-namespace - * namespace.emit("hello"); - * }); - * - * @param name - nsp name - * @param fn optional, nsp `connection` ev handler - */ - of(name: string | RegExp | ParentNspNameMatchFn, fn?: (socket: Socket) => void): Namespace; - /** - * Closes server connection - * - * @param [fn] optional, called as `fn([err])` on error OR all conns closed - */ - close(fn?: (err?: Error) => void): void; - /** - * Registers a middleware, which is a function that gets executed for every incoming {@link Socket}. - * - * @example - * io.use((socket, next) => { - * // ... - * next(); - * }); - * - * @param fn - the middleware function - */ - use(fn: (socket: Socket, next: (err?: ExtendedError) => void) => void): this; - /** - * Targets a room when emitting. - * - * @example - * // the “foo” event will be broadcast to all connected clients in the “room-101” room - * io.to("room-101").emit("foo", "bar"); - * - * // with an array of rooms (a client will be notified at most once) - * io.to(["room-101", "room-102"]).emit("foo", "bar"); - * - * // with multiple chained calls - * io.to("room-101").to("room-102").emit("foo", "bar"); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - to(room: Room | Room[]): BroadcastOperator; - /** - * Targets a room when emitting. Similar to `to()`, but might feel clearer in some cases: - * - * @example - * // disconnect all clients in the "room-101" room - * io.in("room-101").disconnectSockets(); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - in(room: Room | Room[]): BroadcastOperator; - /** - * Excludes a room when emitting. 
- * - * @example - * // the "foo" event will be broadcast to all connected clients, except the ones that are in the "room-101" room - * io.except("room-101").emit("foo", "bar"); - * - * // with an array of rooms - * io.except(["room-101", "room-102"]).emit("foo", "bar"); - * - * // with multiple chained calls - * io.except("room-101").except("room-102").emit("foo", "bar"); - * - * @param room - a room, or an array of rooms - * @return a new {@link BroadcastOperator} instance for chaining - */ - except(room: Room | Room[]): BroadcastOperator; - /** - * Emits an event and waits for an acknowledgement from all clients. - * - * @example - * try { - * const responses = await io.timeout(1000).emitWithAck("some-event"); - * console.log(responses); // one response per client - * } catch (e) { - * // some clients did not acknowledge the event in the given delay - * } - * - * @return a Promise that will be fulfilled when all clients have acknowledged the event - */ - emitWithAck>(ev: Ev, ...args: AllButLast>): Promise>>>; - /** - * Sends a `message` event to all clients. - * - * This method mimics the WebSocket.send() method. - * - * @see https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/send - * - * @example - * io.send("hello"); - * - * // this is equivalent to - * io.emit("message", "hello"); - * - * @return self - */ - send(...args: EventParams): this; - /** - * Sends a `message` event to all clients. Alias of {@link send}. - * - * @return self - */ - write(...args: EventParams): this; - /** - * Sends a message to the other Socket.IO servers of the cluster. - * - * @example - * io.serverSideEmit("hello", "world"); - * - * io.on("hello", (arg1) => { - * console.log(arg1); // prints "world" - * }); - * - * // acknowledgements (without binary content) are supported too: - * io.serverSideEmit("ping", (err, responses) => { - * if (err) { - * // some servers did not acknowledge the event in the given delay - * } else { - * console.log(responses); // one response per server (except the current one) - * } - * }); - * - * io.on("ping", (cb) => { - * cb("pong"); - * }); - * - * @param ev - the event name - * @param args - an array of arguments, which may include an acknowledgement callback at the end - */ - serverSideEmit>(ev: Ev, ...args: EventParams, Ev>): boolean; - /** - * Sends a message and expect an acknowledgement from the other Socket.IO servers of the cluster. - * - * @example - * try { - * const responses = await io.serverSideEmitWithAck("ping"); - * console.log(responses); // one response per server (except the current one) - * } catch (e) { - * // some servers did not acknowledge the event in the given delay - * } - * - * @param ev - the event name - * @param args - an array of arguments - * - * @return a Promise that will be fulfilled when all servers have acknowledged the event - */ - serverSideEmitWithAck>(ev: Ev, ...args: AllButLast>): Promise>>[]>; - /** - * Gets a list of socket ids. - * - * @deprecated this method will be removed in the next major release, please use {@link Server#serverSideEmit} or - * {@link Server#fetchSockets} instead. - */ - allSockets(): Promise>; - /** - * Sets the compress flag. 
- * - * @example - * io.compress(false).emit("hello"); - * - * @param compress - if `true`, compresses the sending data - * @return a new {@link BroadcastOperator} instance for chaining - */ - compress(compress: boolean): BroadcastOperator; - /** - * Sets a modifier for a subsequent event emission that the event data may be lost if the client is not ready to - * receive messages (because of network slowness or other issues, or because they’re connected through long polling - * and is in the middle of a request-response cycle). - * - * @example - * io.volatile.emit("hello"); // the clients may or may not receive it - * - * @return a new {@link BroadcastOperator} instance for chaining - */ - get volatile(): BroadcastOperator; - /** - * Sets a modifier for a subsequent event emission that the event data will only be broadcast to the current node. - * - * @example - * // the “foo” event will be broadcast to all connected clients on this node - * io.local.emit("foo", "bar"); - * - * @return a new {@link BroadcastOperator} instance for chaining - */ - get local(): BroadcastOperator; - /** - * Adds a timeout in milliseconds for the next operation. - * - * @example - * io.timeout(1000).emit("some-event", (err, responses) => { - * if (err) { - * // some clients did not acknowledge the event in the given delay - * } else { - * console.log(responses); // one response per client - * } - * }); - * - * @param timeout - */ - timeout(timeout: number): BroadcastOperator, SocketData>; - /** - * Returns the matching socket instances. - * - * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}. - * - * @example - * // return all Socket instances - * const sockets = await io.fetchSockets(); - * - * // return all Socket instances in the "room1" room - * const sockets = await io.in("room1").fetchSockets(); - * - * for (const socket of sockets) { - * console.log(socket.id); - * console.log(socket.handshake); - * console.log(socket.rooms); - * console.log(socket.data); - * - * socket.emit("hello"); - * socket.join("room1"); - * socket.leave("room2"); - * socket.disconnect(); - * } - */ - fetchSockets(): Promise[]>; - /** - * Makes the matching socket instances join the specified rooms. - * - * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}. - * - * @example - * - * // make all socket instances join the "room1" room - * io.socketsJoin("room1"); - * - * // make all socket instances in the "room1" room join the "room2" and "room3" rooms - * io.in("room1").socketsJoin(["room2", "room3"]); - * - * @param room - a room, or an array of rooms - */ - socketsJoin(room: Room | Room[]): void; - /** - * Makes the matching socket instances leave the specified rooms. - * - * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}. - * - * @example - * // make all socket instances leave the "room1" room - * io.socketsLeave("room1"); - * - * // make all socket instances in the "room1" room leave the "room2" and "room3" rooms - * io.in("room1").socketsLeave(["room2", "room3"]); - * - * @param room - a room, or an array of rooms - */ - socketsLeave(room: Room | Room[]): void; - /** - * Makes the matching socket instances disconnect. - * - * Note: this method also works within a cluster of multiple Socket.IO servers, with a compatible {@link Adapter}. 
- * - * @example - * // make all socket instances disconnect (the connections might be kept alive for other namespaces) - * io.disconnectSockets(); - * - * // make all socket instances in the "room1" room disconnect and close the underlying connections - * io.in("room1").disconnectSockets(true); - * - * @param close - whether to close the underlying connection - */ - disconnectSockets(close?: boolean): void; -} -export { Socket, DisconnectReason, ServerOptions, Namespace, BroadcastOperator, RemoteSocket, }; -export { Event } from "./socket"; diff --git a/spaces/flax-community/Multilingual-VQA/sections/mlm_intro.md b/spaces/flax-community/Multilingual-VQA/sections/mlm_intro.md deleted file mode 100644 index 5cf45dcec8c8af94ea95d93d18238656581b9f16..0000000000000000000000000000000000000000 --- a/spaces/flax-community/Multilingual-VQA/sections/mlm_intro.md +++ /dev/null @@ -1,5 +0,0 @@ -This demo uses a [CLIP-Vision-Bert model checkpoint](https://huggingface.co/flax-community/clip-vision-bert-cc12m-70k) pre-trained using text-only Masked LM on approximately 10 million image-text pairs taken from the [Conceptual 12M dataset](https://github.com/google-research-datasets/conceptual-12m) translated using [MBart](https://huggingface.co/transformers/model_doc/mbart.html). The translations are performed in the following four languages: English, French, German and Spanish, giving 2.5M examples in each language. - -The model can be used for mask-filling as shown in this demo. The caption can be present or written in any of the following: English, French, German and Spanish. - -For more details, click on `Usage` above or `Article` on the sidebar. 🤗 \ No newline at end of file diff --git a/spaces/florim/MedGPT/autogpt/memory/weaviate.py b/spaces/florim/MedGPT/autogpt/memory/weaviate.py deleted file mode 100644 index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/autogpt/memory/weaviate.py +++ /dev/null @@ -1,127 +0,0 @@ -import uuid - -import weaviate -from weaviate import Client -from weaviate.embedded import EmbeddedOptions -from weaviate.util import generate_uuid5 - -from autogpt.config import Config -from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding - - -def default_schema(weaviate_index): - return { - "class": weaviate_index, - "properties": [ - { - "name": "raw_text", - "dataType": ["text"], - "description": "original text for the embedding", - } - ], - } - - -class WeaviateMemory(MemoryProviderSingleton): - def __init__(self, cfg): - auth_credentials = self._build_auth_credentials(cfg) - - url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}" - - if cfg.use_weaviate_embedded: - self.client = Client( - embedded_options=EmbeddedOptions( - hostname=cfg.weaviate_host, - port=int(cfg.weaviate_port), - persistence_data_path=cfg.weaviate_embedded_path, - ) - ) - - print( - f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}" - ) - else: - self.client = Client(url, auth_client_secret=auth_credentials) - - self.index = WeaviateMemory.format_classname(cfg.memory_index) - self._create_schema() - - @staticmethod - def format_classname(index): - # weaviate uses capitalised index names - # The python client uses the following code to format - # index names before the corresponding class is created - if len(index) == 1: - return index.capitalize() - return index[0].capitalize() + index[1:] - - def _create_schema(self): - schema = default_schema(self.index) - if not 
self.client.schema.contains(schema): - self.client.schema.create_class(schema) - - def _build_auth_credentials(self, cfg): - if cfg.weaviate_username and cfg.weaviate_password: - return weaviate.AuthClientPassword( - cfg.weaviate_username, cfg.weaviate_password - ) - if cfg.weaviate_api_key: - return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key) - else: - return None - - def add(self, data): - vector = get_ada_embedding(data) - - doc_uuid = generate_uuid5(data, self.index) - data_object = {"raw_text": data} - - with self.client.batch as batch: - batch.add_data_object( - uuid=doc_uuid, - data_object=data_object, - class_name=self.index, - vector=vector, - ) - - return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}" - - def get(self, data): - return self.get_relevant(data, 1) - - def clear(self): - self.client.schema.delete_all() - - # weaviate does not yet have a neat way to just remove the items in an index - # without removing the entire schema, therefore we need to re-create it - # after a call to delete_all - self._create_schema() - - return "Obliterated" - - def get_relevant(self, data, num_relevant=5): - query_embedding = get_ada_embedding(data) - try: - results = ( - self.client.query.get(self.index, ["raw_text"]) - .with_near_vector({"vector": query_embedding, "certainty": 0.7}) - .with_limit(num_relevant) - .do() - ) - - if len(results["data"]["Get"][self.index]) > 0: - return [ - str(item["raw_text"]) for item in results["data"]["Get"][self.index] - ] - else: - return [] - - except Exception as err: - print(f"Unexpected error {err=}, {type(err)=}") - return [] - - def get_stats(self): - result = self.client.query.aggregate(self.index).with_meta_count().do() - class_data = result["data"]["Aggregate"][self.index] - - return class_data[0]["meta"] if class_data else {} diff --git a/spaces/freddyaboulton/all_demos_3/README.md b/spaces/freddyaboulton/all_demos_3/README.md deleted file mode 100644 index b0128f167b7cb3b79e59221adeab7aa153771cfb..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/all_demos_3/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: All Demos 3 -emoji: 🦀 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/givkashi/seam-carving/README.md b/spaces/givkashi/seam-carving/README.md deleted file mode 100644 index 5173c99f0e1d2061a2f195802d3d3f3428e15827..0000000000000000000000000000000000000000 --- a/spaces/givkashi/seam-carving/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Seam Carving -emoji: 🐨 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/godfiry/runwayml-stable-diffusion-v1-5/app.py b/spaces/godfiry/runwayml-stable-diffusion-v1-5/app.py deleted file mode 100644 index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000 --- a/spaces/godfiry/runwayml-stable-diffusion-v1-5/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch() \ No newline at end of file diff --git a/spaces/gradio/HuBERT/docs/make.bat b/spaces/gradio/HuBERT/docs/make.bat deleted file mode 100644 index baa9d02a79266ed17e0841f08a83931d46583393..0000000000000000000000000000000000000000 --- 
a/spaces/gradio/HuBERT/docs/make.bat +++ /dev/null @@ -1,36 +0,0 @@ -@ECHO OFF - -pushd %~dp0 - -REM Command file for Sphinx documentation - -if "%SPHINXBUILD%" == "" ( - set SPHINXBUILD=python -msphinx -) -set SOURCEDIR=. -set BUILDDIR=_build -set SPHINXPROJ=fairseq - -if "%1" == "" goto help - -%SPHINXBUILD% >NUL 2>NUL -if errorlevel 9009 ( - echo. - echo.The Sphinx module was not found. Make sure you have Sphinx installed, - echo.then set the SPHINXBUILD environment variable to point to the full - echo.path of the 'sphinx-build' executable. Alternatively you may add the - echo.Sphinx directory to PATH. - echo. - echo.If you don't have Sphinx installed, grab it from - echo.http://sphinx-doc.org/ - exit /b 1 -) - -%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% -goto end - -:help -%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% - -:end -popd diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Search/index.ts b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Search/index.ts deleted file mode 100644 index 85bb434b226e3b40d04bc31093a62537be728fc1..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Search/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from './Search'; diff --git a/spaces/gstaff/gif-reverser/app.py b/spaces/gstaff/gif-reverser/app.py deleted file mode 100644 index 6a796295e5db8b0a3f97a81caf2fd3b515b548d0..0000000000000000000000000000000000000000 --- a/spaces/gstaff/gif-reverser/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import gradio as gr -from PIL import Image -import tempfile - - -def reverse(input_path, frames_per_second): - # Open the GIF file - gif = Image.open(input_path) - - # Get the number of frames in the GIF - num_frames = gif.n_frames - - # Create a list to hold the reversed frames - reversed_frames = [] - - # Iterate through the frames in reverse order - for frame_number in range(num_frames, 0, -1): - gif.seek(frame_number - 1) - frame = gif.copy() - reversed_frames.append(frame) - - # Interesting blur effect on gifs with transparent background - if 'duration' not in gif.info: - # Default is 8 frames per second from AnimatedDiff - duration = frames_per_second * num_frames - gif.info['duration'] = duration - - # Save the reversed frames as a new GIF - with tempfile.NamedTemporaryFile(suffix=".gif", delete=False) as temp_file: - temp_filename = temp_file.name - reversed_frames[0].save(temp_filename, save_all=True, append_images=reversed_frames[1:], - duration=gif.info['duration'], loop=0) - return temp_filename - - -def reverse_gifs(input_paths, frames_per_second): - if input_paths is None: - return None, None - input_paths = [f.name for f in input_paths] - - temp_filenames = [] - for input_path in input_paths: - temp_filenames.append(reverse(input_path, frames_per_second)) - - return input_paths, temp_filenames - - -with gr.Blocks(theme='gstaff/sketch') as demo: - gr.Markdown(value='# GIF Reversing Tool') - with gr.Row(): - with gr.Column(scale=1): - gr.Markdown('## Load animated gifs to reverse') - image_in = gr.Files(label="Input Gif Files", type='file', file_types=['.gif']) - frame_rate = gr.Number(label="Frames per Second to use (if not in gif metadata)", value=8, minimum=1, - maximum=360, step=1, interactive=True) - image_display = gr.Gallery(label="Input Images", interactive=False) - with gr.Column(scale=1): - gr.Markdown('## View the reversed gif') - image_out = gr.Gallery(label="Reversed Gif Images") - clear_button = gr.ClearButton(components=[image_in]) - - 
image_in.change(reverse_gifs, [image_in, frame_rate], [image_display, image_out]) - gr.Examples(examples=[[['./example.gif'], 8]], - inputs=[image_in, frame_rate], outputs=[image_display, image_out], fn=reverse_gifs, cache_examples=True) - - with gr.Accordion('Developer Notes:', open=False): - gr.Markdown('This gif reverser is a simple utility I made for myself.\n\n' - 'The default of 8 frames per second works for the default settings of [AnimateDiff](https://github.com/continue-revolution/sd-webui-animatediff).\n\n' - 'Future enhancements could be to auto-determine the framerate when otherwise not available in the gif metadata.') - -if __name__ == '__main__': - demo.launch() diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/torch_utils/ops/conv2d_gradfix.py b/spaces/gyugnsu/DragGan-Inversion/PTI/torch_utils/ops/conv2d_gradfix.py deleted file mode 100644 index e95e10d0b1d0315a63a76446fd4c5c293c8bbc6d..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/torch_utils/ops/conv2d_gradfix.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.conv2d` that supports -arbitrarily high order gradients with zero performance penalty.""" - -import warnings -import contextlib -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. -weight_gradients_disabled = False # Forcefully disable computation of gradients with respect to the weights. 
- -@contextlib.contextmanager -def no_weight_gradients(): - global weight_gradients_disabled - old = weight_gradients_disabled - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - -#---------------------------------------------------------------------------- - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=False, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=0, dilation=dilation, groups=groups).apply(input, weight, bias) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups) - -def conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=True, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation).apply(input, weight, bias) - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(input): - assert isinstance(input, torch.Tensor) - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - if input.device.type != 'cuda': - return False - if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']): - return True - warnings.warn(f'conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d().') - return False - -def _tuple_of_ints(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - assert len(xs) == ndim - assert all(isinstance(x, int) for x in xs) - return xs - -#---------------------------------------------------------------------------- - -_conv2d_gradfix_cache = dict() - -def _conv2d_gradfix(transpose, weight_shape, stride, padding, output_padding, dilation, groups): - # Parse arguments. - ndim = 2 - weight_shape = tuple(weight_shape) - stride = _tuple_of_ints(stride, ndim) - padding = _tuple_of_ints(padding, ndim) - output_padding = _tuple_of_ints(output_padding, ndim) - dilation = _tuple_of_ints(dilation, ndim) - - # Lookup from cache. - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in _conv2d_gradfix_cache: - return _conv2d_gradfix_cache[key] - - # Validate arguments. - assert groups >= 1 - assert len(weight_shape) == ndim + 2 - assert all(stride[i] >= 1 for i in range(ndim)) - assert all(padding[i] >= 0 for i in range(ndim)) - assert all(dilation[i] >= 0 for i in range(ndim)) - if not transpose: - assert all(output_padding[i] == 0 for i in range(ndim)) - else: # transpose - assert all(0 <= output_padding[i] < max(stride[i], dilation[i]) for i in range(ndim)) - - # Helpers. - common_kwargs = dict(stride=stride, padding=padding, dilation=dilation, groups=groups) - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - # Forward & backward. 
- class Conv2d(torch.autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - assert weight.shape == weight_shape - if not transpose: - output = torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - else: # transpose - output = torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, output_padding=output_padding, **common_kwargs) - ctx.save_for_backward(input, weight) - return output - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - grad_input = None - grad_weight = None - grad_bias = None - - if ctx.needs_input_grad[0]: - p = calc_output_padding(input_shape=input.shape, output_shape=grad_output.shape) - grad_input = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs).apply(grad_output, weight, None) - assert grad_input.shape == input.shape - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - assert grad_weight.shape == weight_shape - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum([0, 2, 3]) - - return grad_input, grad_weight, grad_bias - - # Gradient with respect to the weights. - class Conv2dGradWeight(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - op = torch._C._jit_get_operation('aten::cudnn_convolution_backward_weight' if not transpose else 'aten::cudnn_convolution_transpose_backward_weight') - flags = [torch.backends.cudnn.benchmark, torch.backends.cudnn.deterministic, torch.backends.cudnn.allow_tf32] - grad_weight = op(weight_shape, grad_output, input, padding, stride, dilation, groups, *flags) - assert grad_weight.shape == weight_shape - ctx.save_for_backward(grad_output, input) - return grad_weight - - @staticmethod - def backward(ctx, grad2_grad_weight): - grad_output, input = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = Conv2d.apply(input, grad2_grad_weight, None) - assert grad2_grad_output.shape == grad_output.shape - - if ctx.needs_input_grad[1]: - p = calc_output_padding(input_shape=input.shape, output_shape=grad_output.shape) - grad2_input = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs).apply(grad_output, grad2_grad_weight, None) - assert grad2_input.shape == input.shape - - return grad2_grad_output, grad2_input - - _conv2d_gradfix_cache[key] = Conv2d - return Conv2d - -#---------------------------------------------------------------------------- diff --git a/spaces/haakohu/deep_privacy2_face/configs/anonymizers/FB_cse_mask_face.py b/spaces/haakohu/deep_privacy2_face/configs/anonymizers/FB_cse_mask_face.py deleted file mode 100644 index d411d66cc051f6b4c0d907551735e8f661cf17f1..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/configs/anonymizers/FB_cse_mask_face.py +++ /dev/null @@ -1,29 +0,0 @@ -from dp2.anonymizer import Anonymizer -from dp2.detection.cse_mask_face_detector import CSeMaskFaceDetector -from ..defaults import common -from tops.config import LazyCall as L - -detector = L(CSeMaskFaceDetector)( - mask_rcnn_cfg=dict(), - face_detector_cfg=dict(), - face_post_process_cfg=dict(target_imsize=(256, 256), fdf128_expand=False), - cse_cfg=dict(), - cse_post_process_cfg=dict( - target_imsize=(288, 160), - exp_bbox_cfg=dict(percentage_background=0.3, axis_minimum_expansion=.1), - exp_bbox_filter=dict(minimum_area=32*32, 
min_bbox_ratio_inside=0, aspect_ratio_range=[0, 99999]), - iou_combine_threshold=0.4, - dilation_percentage=0.02, - normalize_embedding=False - ), - score_threshold=0.3, - cache_directory=common.output_dir.joinpath("cse_mask_face_detection_cache") -) - -anonymizer = L(Anonymizer)( - detector="${detector}", - face_G_cfg="configs/fdf/stylegan.py", - person_G_cfg="configs/fdh/styleganL_nocse.py", - cse_person_G_cfg="configs/fdh/styleganL.py", - car_G_cfg="configs/generators/dummy/pixelation8.py" -) diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/execute_code.py b/spaces/hamelcubsfan/AutoGPT/autogpt/commands/execute_code.py deleted file mode 100644 index 11266f852727f2f8aedbc995b1e504a17acbfb77..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/execute_code.py +++ /dev/null @@ -1,158 +0,0 @@ -"""Execute code in a Docker container""" -import os -import subprocess - -import docker -from docker.errors import ImageNotFound - -from autogpt.workspace import WORKSPACE_PATH, path_in_workspace - - -def execute_python_file(file: str) -> str: - """Execute a Python file in a Docker container and return the output - - Args: - file (str): The name of the file to execute - - Returns: - str: The output of the file - """ - - print(f"Executing file '{file}' in workspace '{WORKSPACE_PATH}'") - - if not file.endswith(".py"): - return "Error: Invalid file type. Only .py files are allowed." - - file_path = path_in_workspace(file) - - if not os.path.isfile(file_path): - return f"Error: File '{file}' does not exist." - - if we_are_running_in_a_docker_container(): - result = subprocess.run( - f"python {file_path}", capture_output=True, encoding="utf8", shell=True - ) - if result.returncode == 0: - return result.stdout - else: - return f"Error: {result.stderr}" - - try: - client = docker.from_env() - - # You can replace this with the desired Python image/version - # You can find available Python images on Docker Hub: - # https://hub.docker.com/_/python - image_name = "python:3-alpine" - try: - client.images.get(image_name) - print(f"Image '{image_name}' found locally") - except ImageNotFound: - print(f"Image '{image_name}' not found locally, pulling from Docker Hub") - # Use the low-level API to stream the pull response - low_level_client = docker.APIClient() - for line in low_level_client.pull(image_name, stream=True, decode=True): - # Print the status and progress, if available - status = line.get("status") - progress = line.get("progress") - if status and progress: - print(f"{status}: {progress}") - elif status: - print(status) - - container = client.containers.run( - image_name, - f"python {file}", - volumes={ - os.path.abspath(WORKSPACE_PATH): { - "bind": "/workspace", - "mode": "ro", - } - }, - working_dir="/workspace", - stderr=True, - stdout=True, - detach=True, - ) - - container.wait() - logs = container.logs().decode("utf-8") - container.remove() - - # print(f"Execution complete. Output: {output}") - # print(f"Logs: {logs}") - - return logs - - except docker.errors.DockerException as e: - print( - "Could not run the script in a container. 
If you haven't already, please install Docker https://docs.docker.com/get-docker/" - ) - return f"Error: {str(e)}" - - except Exception as e: - return f"Error: {str(e)}" - - -def execute_shell(command_line: str) -> str: - """Execute a shell command and return the output - - Args: - command_line (str): The command line to execute - - Returns: - str: The output of the command - """ - current_dir = os.getcwd() - # Change dir into workspace if necessary - if str(WORKSPACE_PATH) not in current_dir: - os.chdir(WORKSPACE_PATH) - - print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'") - - result = subprocess.run(command_line, capture_output=True, shell=True) - output = f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}" - - # Change back to whatever the prior working dir was - - os.chdir(current_dir) - - return output - - -def execute_shell_popen(command_line) -> str: - """Execute a shell command with Popen and returns an english description - of the event and the process id - - Args: - command_line (str): The command line to execute - - Returns: - str: Description of the fact that the process started and its id - """ - current_dir = os.getcwd() - # Change dir into workspace if necessary - if str(WORKSPACE_PATH) not in current_dir: - os.chdir(WORKSPACE_PATH) - - print(f"Executing command '{command_line}' in working directory '{os.getcwd()}'") - - do_not_show_output = subprocess.DEVNULL - process = subprocess.Popen( - command_line, shell=True, stdout=do_not_show_output, stderr=do_not_show_output - ) - - # Change back to whatever the prior working dir was - - os.chdir(current_dir) - - return f"Subprocess started with PID:'{str(process.pid)}'" - - -def we_are_running_in_a_docker_container() -> bool: - """Check if we are running in a Docker container - - Returns: - bool: True if we are running in a Docker container, False otherwise - """ - return os.path.exists("/.dockerenv") diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/README.md b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/README.md deleted file mode 100644 index 9765b24a730b77556104187ac3ef5439ab0859fd..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# Utility functions - -This folder contain utility functions that are not used in the -core library, but are useful for building models or training -code using the config system. 
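The deleted `execute_code.py` module above runs user-supplied Python inside a `python:3-alpine` Docker container via docker-py. As a quick reference, a minimal standalone sketch of that same pattern follows; the image name, workspace path, and helper name are illustrative assumptions, not values taken from the original repository.

```python
# Minimal sketch (not the original AutoGPT implementation): run a Python file
# from a host workspace inside a python:3-alpine container and return its logs.
# Assumes Docker is installed and the daemon is reachable; names are illustrative.
import os

import docker
from docker.errors import ImageNotFound


def run_python_in_container(file_name: str, workspace: str) -> str:
    client = docker.from_env()
    image_name = "python:3-alpine"

    try:
        client.images.get(image_name)   # reuse the image if it is already local
    except ImageNotFound:
        client.images.pull(image_name)  # otherwise pull it once from Docker Hub

    container = client.containers.run(
        image_name,
        f"python {file_name}",
        volumes={os.path.abspath(workspace): {"bind": "/workspace", "mode": "ro"}},
        working_dir="/workspace",
        detach=True,
    )
    container.wait()                    # block until the script finishes
    logs = container.logs().decode("utf-8")
    container.remove()
    return logs
```

Mounting the workspace read-only, as the original module does, keeps the sandboxed script from modifying host files.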
diff --git a/spaces/harshitv804/LawGPT/app.py b/spaces/harshitv804/LawGPT/app.py deleted file mode 100644 index 40a21c8307a938c743e881dea9921c3f9836e0ce..0000000000000000000000000000000000000000 --- a/spaces/harshitv804/LawGPT/app.py +++ /dev/null @@ -1,66 +0,0 @@ -from langchain.vectorstores import Chroma -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -from transformers import pipeline -import torch -from langchain.llms import HuggingFacePipeline -from langchain.embeddings import SentenceTransformerEmbeddings -from langchain.chains import RetrievalQA -import gradio as gr - -def chat(chat_history, user_input): - - bot_response = qa_chain({"query": user_input}) - bot_response = bot_response['result'] - response = "" - for letter in ''.join(bot_response): - response += letter + "" - yield chat_history + [(user_input, response)] - -checkpoint = "MBZUAI/LaMini-Flan-T5-783M" -tokenizer = AutoTokenizer.from_pretrained(checkpoint) -base_model = AutoModelForSeq2SeqLM.from_pretrained( - checkpoint, - device_map="auto", - torch_dtype = torch.float32) - -embeddings = SentenceTransformerEmbeddings(model_name="sentence-transformers/multi-qa-mpnet-base-dot-v1") - -db = Chroma(persist_directory="ipc_vector_data", embedding_function=embeddings) - -pipe = pipeline( - 'text2text-generation', - model = base_model, - tokenizer = tokenizer, - max_length = 512, - do_sample = True, - temperature = 0.3, - top_p= 0.95 -) -local_llm = HuggingFacePipeline(pipeline=pipe) - -qa_chain = RetrievalQA.from_chain_type(llm=local_llm, - chain_type='stuff', - retriever=db.as_retriever(search_type="similarity", search_kwargs={"k":2}), - return_source_documents=True, - ) - -with gr.Blocks() as gradioUI: - - gr.Image('lawgptlogo.png') - with gr.Row(): - chatbot = gr.Chatbot() - with gr.Row(): - input_query = gr.TextArea(label='Input',show_copy_button=True) - with gr.Row(): - with gr.Column(): - submit_btn = gr.Button("Submit", variant="primary") - with gr.Column(): - clear_input_btn = gr.Button("Clear Input") - with gr.Column(): - clear_chat_btn = gr.Button("Clear Chat") - - submit_btn.click(chat, [chatbot, input_query], chatbot) - submit_btn.click(lambda: gr.update(value=""), None, input_query, queue=False) - clear_input_btn.click(lambda: None, None, input_query, queue=False) - clear_chat_btn.click(lambda: None, None, chatbot, queue=False) -gradioUI.queue().launch() \ No newline at end of file diff --git a/spaces/haseena97/malaysian_dessert/app.py b/spaces/haseena97/malaysian_dessert/app.py deleted file mode 100644 index de0820b6f452905ea1ac5105724f96a682dd29bc..0000000000000000000000000000000000000000 --- a/spaces/haseena97/malaysian_dessert/app.py +++ /dev/null @@ -1,34 +0,0 @@ -from fastai.vision.all import * -import gradio as gr -learn = load_learner('kuih02.pkl') -with open('kuih02.json', 'r') as f: - data = json.load(f) -def ingredient(ball): - case = '' - for val in data: - if val['Name'] == ball: - case = case + val['Ingredient'] - return case -def calories(ball): - box = '' - for val in data: - if val['Name'] == ball: - box = box + val['Calories per piece'] - return box -def category(ball): - cat = '' - for val in data: - if val['Name'] == ball: - cat = cat + val['Category'] - return cat -def classify_image (img): - ball,_,probs = learn.predict(img) - return ('Name:{0}\nCategory:{1}\nIngredients:{2}\nCalories:{3}'.format(ball,category(ball),ingredient(ball),calories(ball))) -# create gradio interface -title = 'Lets Find Out!' 
-image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = ['dodol.jpg','bingka.jpg','Cucur badak.jpg'] - -intf = gr.Interface(fn = classify_image, title = title ,inputs = image, outputs = ['text'], examples = examples) -intf.launch(debug=False) \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h deleted file mode 100644 index a99c8ebddaa4936e26437b42d62e2b8355c655aa..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h +++ /dev/null @@ -1,115 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -#pragma once -#include - -namespace detectron2 { - -at::Tensor ROIAlignRotated_forward_cpu( - const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio); - -at::Tensor ROIAlignRotated_backward_cpu( - const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio); - -#ifdef WITH_CUDA -at::Tensor ROIAlignRotated_forward_cuda( - const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio); - -at::Tensor ROIAlignRotated_backward_cuda( - const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio); -#endif - -// Interface for Python -inline at::Tensor ROIAlignRotated_forward( - const at::Tensor& input, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int sampling_ratio) { - if (input.is_cuda()) { -#ifdef WITH_CUDA - return ROIAlignRotated_forward_cuda( - input, - rois, - spatial_scale, - pooled_height, - pooled_width, - sampling_ratio); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - return ROIAlignRotated_forward_cpu( - input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio); -} - -inline at::Tensor ROIAlignRotated_backward( - const at::Tensor& grad, - const at::Tensor& rois, - const float spatial_scale, - const int pooled_height, - const int pooled_width, - const int batch_size, - const int channels, - const int height, - const int width, - const int sampling_ratio) { - if (grad.is_cuda()) { -#ifdef WITH_CUDA - return ROIAlignRotated_backward_cuda( - grad, - rois, - spatial_scale, - pooled_height, - pooled_width, - batch_size, - channels, - height, - width, - sampling_ratio); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - return ROIAlignRotated_backward_cpu( - grad, - rois, - spatial_scale, - pooled_height, - pooled_width, - batch_size, - channels, - height, - width, - sampling_ratio); -} - -} // namespace detectron2 diff --git 
a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/config.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/config.py deleted file mode 100644 index 74f63672bba7cd25679054b19ff87254a0e24974..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/config.py +++ /dev/null @@ -1,48 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -from detectron2.config import CfgNode as CN - - -def add_pointrend_config(cfg): - """ - Add config for PointRend. - """ - # We retry random cropping until no single category in semantic segmentation GT occupies more - # than `SINGLE_CATEGORY_MAX_AREA` part of the crop. - cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA = 1.0 - # Color augmentatition from SSD paper for semantic segmentation model during training. - cfg.INPUT.COLOR_AUG_SSD = False - - # Names of the input feature maps to be used by a coarse mask head. - cfg.MODEL.ROI_MASK_HEAD.IN_FEATURES = ("p2",) - cfg.MODEL.ROI_MASK_HEAD.FC_DIM = 1024 - cfg.MODEL.ROI_MASK_HEAD.NUM_FC = 2 - # The side size of a coarse mask head prediction. - cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION = 7 - # True if point head is used. - cfg.MODEL.ROI_MASK_HEAD.POINT_HEAD_ON = False - - cfg.MODEL.POINT_HEAD = CN() - cfg.MODEL.POINT_HEAD.NAME = "StandardPointHead" - cfg.MODEL.POINT_HEAD.NUM_CLASSES = 80 - # Names of the input feature maps to be used by a mask point head. - cfg.MODEL.POINT_HEAD.IN_FEATURES = ("p2",) - # Number of points sampled during training for a mask point head. - cfg.MODEL.POINT_HEAD.TRAIN_NUM_POINTS = 14 * 14 - # Oversampling parameter for PointRend point sampling during training. Parameter `k` in the - # original paper. - cfg.MODEL.POINT_HEAD.OVERSAMPLE_RATIO = 3 - # Importance sampling parameter for PointRend point sampling during training. Parametr `beta` in - # the original paper. - cfg.MODEL.POINT_HEAD.IMPORTANCE_SAMPLE_RATIO = 0.75 - # Number of subdivision steps during inference. - cfg.MODEL.POINT_HEAD.SUBDIVISION_STEPS = 5 - # Maximum number of points selected at each subdivision step (N). - cfg.MODEL.POINT_HEAD.SUBDIVISION_NUM_POINTS = 28 * 28 - cfg.MODEL.POINT_HEAD.FC_DIM = 256 - cfg.MODEL.POINT_HEAD.NUM_FC = 3 - cfg.MODEL.POINT_HEAD.CLS_AGNOSTIC_MASK = False - # If True, then coarse prediction features are used as inout for each layer in PointRend's MLP. 
- cfg.MODEL.POINT_HEAD.COARSE_PRED_EACH_LAYER = True - cfg.MODEL.POINT_HEAD.COARSE_SEM_SEG_HEAD_NAME = "SemSegFPNHead" diff --git a/spaces/hishamomran/explicit_text_classifier/README.md b/spaces/hishamomran/explicit_text_classifier/README.md deleted file mode 100644 index f98fe09ef4ed51c33b808761facb705814143d6b..0000000000000000000000000000000000000000 --- a/spaces/hishamomran/explicit_text_classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Michellejieli-inappropriate Text Classifier -emoji: 🚀 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -duplicated_from: suzyanil/explicit_text_classifier_app ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/documentation/training_example_Hippocampus.md b/spaces/ho11laqe/nnUNet_calvingfront_detection/documentation/training_example_Hippocampus.md deleted file mode 100644 index 5bb7f19d01a525b67f93e749713380d9cf0e1c9a..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/documentation/training_example_Hippocampus.md +++ /dev/null @@ -1,40 +0,0 @@ -# Example: 3D U-Net training on the Hippocampus dataset - -This is a step-by-step example on how to run a 3D full resolution Training with the Hippocampus dataset from the -Medical Segmentation Decathlon. - -1) Install nnU-Net by following the instructions [here](../readme.md#installation). Make sure to set all relevant paths, -also see [here](setting_up_paths.md). This step is necessary so that nnU-Net knows where to store raw data, -preprocessed data and trained models. -2) Download the Hippocampus dataset of the Medical Segmentation Decathlon from -[here](https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2). Then extract the archive to a -destination of your choice. -3) Decathlon data come as 4D niftis. This is not compatible with nnU-Net (see dataset format specified -[here](dataset_conversion.md)). Convert the Hippocampus dataset into the correct format with - - ```bash - nnUNet_convert_decathlon_task -i /xxx/Task04_Hippocampus - ``` - - Note that `Task04_Hippocampus` must be the folder that has the three 'imagesTr', 'labelsTr', 'imagesTs' subfolders! - The converted dataset can be found in $nnUNet_raw_data_base/nnUNet_raw_data ($nnUNet_raw_data_base is the folder for - raw data that you specified during installation) -4) You can now run nnU-Nets pipeline configuration (and the preprocessing) with the following line: - ```bash - nnUNet_plan_and_preprocess -t 4 - ``` - Where 4 refers to the task ID of the Hippocampus dataset. -5) Now you can already start network training. This is how you train a 3d full resoltion U-Net on the Hippocampus dataset: - ```bash - nnUNet_train 3d_fullres nnUNetTrainerV2 4 0 - ``` - nnU-Net per default requires all trainings as 5-fold cross validation. The command above will run only the training for the - first fold (fold 0). 4 is the task identifier of the hippocampus dataset. Training one fold should take about 9 - hours on a modern GPU. - -This tutorial is only intended to demonstrate how easy it is to get nnU-Net running. You do not need to finish the -network training - pretrained models for the hippocampus task are available (see [here](../readme.md#run-inference)). - -The only prerequisite for running nnU-Net on your custom dataset is to bring it into a structured, nnU-Net compatible -format. nnU-Net will take care of the rest. 
See [here](dataset_conversion.md) for instructions on how to convert -datasets into nnU-Net compatible format. diff --git a/spaces/hugggof/vampnet/README.md b/spaces/hugggof/vampnet/README.md deleted file mode 100644 index 9f63c43e04e5c6c4bf9d1ec12276636ee77a075d..0000000000000000000000000000000000000000 --- a/spaces/hugggof/vampnet/README.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: "VampNet: Music Generation with Masked Transformers" -emoji: 🤖 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -python_version: 3.9 ---- - -# VampNet - -This repository contains recipes for training generative music models on top of the Descript Audio Codec. - -## try `unloop` -you can try vampnet in a co-creative looper called unloop. see this link: https://github.com/hugofloresgarcia/unloop - -# Setting up - -**Requires Python 3.9**. - -you'll need a Python 3.9 environment to run VampNet. This is due to a [known issue with madmom](https://github.com/hugofloresgarcia/vampnet/issues/15). - -(for example, using conda) -```bash -conda create -n vampnet python=3.9 -conda activate vampnet -``` - - -install VampNet - -```bash -git clone https://github.com/hugofloresgarcia/vampnet.git -pip install -e ./vampnet -``` - -## A note on argbind -This repository relies on [argbind](https://github.com/pseeth/argbind) to manage CLIs and config files. -Config files are stored in the `conf/` folder. - -## Getting the Pretrained Models - -### Licensing for Pretrained Models: -The weights for the models are licensed [`CC BY-NC-SA 4.0`](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.ml). Likewise, any VampNet models fine-tuned on the pretrained models are also licensed [`CC BY-NC-SA 4.0`](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.ml). - -Download the pretrained models from [this link](https://zenodo.org/record/8136629). Then, extract the models to the `models/` folder. - - -# Usage - -## Launching the Gradio Interface -You can launch a gradio UI to play with vampnet. - -```bash -python app.py --args.load conf/interface.yml --Interface.device cuda -``` - -# Training / Fine-tuning - -## Training a model - -To train a model, run the following script: - -```bash -python scripts/exp/train.py --args.load conf/vampnet.yml --save_path /path/to/checkpoints -``` - -You can edit `conf/vampnet.yml` to change the dataset paths or any training hyperparameters. - -For coarse2fine models, you can use `conf/c2f.yml` as a starting configuration. - -See `python scripts/exp/train.py -h` for a list of options. - -## Fine-tuning -To fine-tune a model, use the script in `scripts/exp/fine_tune.py` to generate 3 configuration files: `c2f.yml`, `coarse.yml`, and `interface.yml`. -The first two are used to fine-tune the coarse and fine models, respectively. The last one is used to launch the gradio interface. - -```bash -python scripts/exp/fine_tune.py "/path/to/audio1.mp3 /path/to/audio2/ /path/to/audio3.wav" -``` - -This will create a folder under `conf//` with the 3 configuration files. - -The save_paths will be set to `runs//coarse` and `runs//c2f`. - -launch the coarse job: -```bash -python scripts/exp/train.py --args.load conf//coarse.yml -``` - -this will save the coarse model to `runs//coarse/ckpt/best/`. 
- -launch the c2f job: -```bash -python scripts/exp/train.py --args.load conf//c2f.yml -``` - -launch the interface: -```bash -python app.py --args.load conf/generated//interface.yml -``` - - diff --git a/spaces/hysts/ControlNet-with-Anything-v4/style.css b/spaces/hysts/ControlNet-with-Anything-v4/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/hysts/ControlNet-with-Anything-v4/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc03_32gpu_r50.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc03_32gpu_r50.py deleted file mode 100644 index a44a5d771e17ecbeffe3437f3500e9d0c9dcc105..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc03_32gpu_r50.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.3 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.4 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 20 -config.warmup_epoch = config.num_epoch // 10 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/hzy123/bingo/src/components/ui/dialog.tsx b/spaces/hzy123/bingo/src/components/ui/dialog.tsx deleted file mode 100644 index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/components/ui/dialog.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DialogPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - children, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - -
            - {children} -
            -
            -) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
            -) -DialogHeader.displayName = 'DialogHeader' - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
            -) -DialogFooter.displayName = 'DialogFooter' - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription -} diff --git a/spaces/hzy123/bingo/src/components/ui/icons.tsx b/spaces/hzy123/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: 
React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/iamstolas/STOLAS/src/components/chat-list.tsx b/spaces/iamstolas/STOLAS/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
            - {messages.map((message, index) => ( - - - {index < messages.length - 1 && ( - - )} - - ))} -
            - ) -} diff --git a/spaces/inamXcontru/PoeticTTS/Bubble Gum Download In Tamil Torrent A Must-See Film for All Ages.md b/spaces/inamXcontru/PoeticTTS/Bubble Gum Download In Tamil Torrent A Must-See Film for All Ages.md deleted file mode 100644 index 411d61b8d2d133b88d0a263d118163eb41383771..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Bubble Gum Download In Tamil Torrent A Must-See Film for All Ages.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Bubble Gum Download In Tamil Torrent


Download File: https://gohhs.com/2uz497



            - - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Isplever Classic 15 Crack REPACK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Isplever Classic 15 Crack REPACK.md deleted file mode 100644 index a265a2ead85a7eefc046d9e78e9944ab7e607f3c..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Isplever Classic 15 Crack REPACK.md +++ /dev/null @@ -1,10 +0,0 @@ - -

            Videos such as Watch full movies online for free. Most ispLever GUI design tools can analyze designs such as the asynchronous reset, CML, or CMOS for design topologies with advanced timing and power analysis. ispLever has been downloaded over 136,891 times.

            -

            The package includes ispLever 7.1 Lattice Semiconductor 2008 is designed to support the Lattice's FPGA and ASIC families. It has extended the amount of functionality and added new synthesis, tools, and support features.

            -

            Isplever Classic 15 Crack


Download File ✏️ ✏️ ✏️ https://urlin.us/2uEy04



            -

            3GPP in the browser support. Free Download Video Bokep Paris Hilton 3gp -- ellleig, 17:41:03 02/01/14 Sat... ispLever 7.1 Lattice Semiconductor 2008 for the Lattice's FPGA and ASIC families. It has extended the amount of functionality and added new synthesis, tools, and support features.

            -

            Rise Of Nation Extended Edition Download [Top rated] plaku e plaka te nana naile video kllokot. Duke Project 3 (1990) EK 46012.zip Cool Resizer 2012 crack [Top rated] plaku e plaka te nana naile video kllokot. pliku in.zip. The package includes ispLEVER 7.1 Lattice Semiconductor 2008 is designed to support the Lattice's FPGA and ASIC families. It has extended the amount of functionality and added new synthesis, tools, and support features.

            -

            -

            The intuitive UUi environment makes it easy to browse through parameters, change settings and save your settings.
            ispLEVER for Windows, UNIX and Linux features the industry-leading Synplify Pro VHDL and Verilog synthesis tool from Synposys, including tools like HDL Analyst for powerful Verilog and VHDL view/debug. ispLEVER for Windows also includes the very high performance Aldec Active-HDL Lattice Edition timing and functional simulator, which yields fast simulation simulation results and includes mixed language support.

            899543212b
            -
            -
            \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Autodesk Xforce 2011 Keygen [WORK] 16.md b/spaces/inreVtussa/clothingai/Examples/Autodesk Xforce 2011 Keygen [WORK] 16.md deleted file mode 100644 index 9fe6da96449007065a8627f59cf2ea059e207e30..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Autodesk Xforce 2011 Keygen [WORK] 16.md +++ /dev/null @@ -1,46 +0,0 @@ -
            -

            How to Activate Autodesk Products with Xforce Keygen 2011

            -

            Xforce Keygen 2011 is a crack tool that can generate activation codes for various Autodesk products, such as AutoCAD, Maya, Inventor, Revit, and more. With Xforce Keygen 2011, you can enjoy the full features of Autodesk software without paying for a license. In this article, we will show you how to use Xforce Keygen 2011 to activate your Autodesk products.

            -

            Autodesk Xforce 2011 Keygen 16


Download File: https://tiurll.com/2uCjP4



            -

            Step 1: Download and Install Xforce Keygen 2011

            -

            The first step is to download and install Xforce Keygen 2011 from a reliable source. You can find the download links for Xforce Keygen 2011 at [^2^] or [^1^]. Make sure you choose the correct version for your operating system (32-bit or 64-bit). After downloading, extract the zip file and run the setup file to install Xforce Keygen 2011 on your computer.

            -

            Step 2: Finish the Installation and Restart Autodesk Product

            -

            The next step is to finish the installation of your Autodesk product and restart it. If you have not installed your Autodesk product yet, you can download it from the official website or use a DVD. Follow the instructions on the screen to complete the installation process. When prompted for a serial number, you can enter any of these:

            -
              -
• 666-69696969
• -
• 667-98989898
• -
• 400-45454545
• -
• 066-66666666
• -
            -

            Then, enter any of these product keys according to your Autodesk product:

            -

            - - - - - - - - - - - - - - - - - -
Product Name | Product Key
AutoCAD 2011 | 001C1
AutoCAD Architecture 2011 | 185C1
AutoCAD Civil 3D 2011 | 237C1
AutoCAD Electrical 2011 | 225C1
AutoCAD Inventor Professional Suite 2011 | 462C1
AutoCAD LT 2011 | 057C1
AutoCAD Map 3D 2011 | 129C1
AutoCAD Mechanical 2011 | 206C1
AutoCAD MEP 2011 | 235C1
AutoCAD P&ID 2011 | 448C1
AutoCAD Plant 3D 2011 | 426C1
Autodesk Maya 2011 | 657C1
Autodesk Revit Architecture Suite 2011 | 241C1
Autodesk Revit Structure Suite 2011 | 256C1
Autodesk Inventor Publisher 2011 | 666C1

            Step 3: Disable Your Internet Connection and Antivirus

            -

            The third step is to disable your internet connection and antivirus before activating your Autodesk product. This is to prevent Autodesk from detecting and blocking the activation process. You can disconnect your internet cable or turn off your Wi-Fi. You can also temporarily disable your antivirus software or firewall from the settings. Remember to turn them back on after the activation is done.

            -

            Step 4: Run Xforce Keygen 2011 and Generate an Activation Code

            -

            The fourth step is to run Xforce Keygen 2011 and generate an activation code for your Autodesk product. You can find Xforce Keygen 2011 in the folder where you installed it. Right-click on it and choose Run as administrator. You will see a window like this:

            -Xforce Keygen 2011 window -

            Make sure you select the correct product from the drop-down list. Then, click on the Patch button. You should see a message saying "Successfully patched". Next, copy the request code from the Autodesk activation screen and paste it into the keygen. Then, click on the Generate button. You will get an activation code like this:

            -Xforce Keygen 2011 activation code -

            Step 5: Enter the Activation Code and Enjoy Your Autodesk Product

            -

            The final step is to enter the activation code into the Autodesk activation screen and enjoy your Autodesk product. Go back to the Autodesk activation screen and select "I have an activation code from Autodesk". Then, copy and paste the activation code from the keygen into the fields. Click on Next to finish the activation process. You should see a message saying "Thank you for activating your Autodesk product". Congratulations! You have successfully activated your Autodesk product with Xforce Keygen 2011.

            d5da3c52bf
            -
            -
            \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Brainsbreaker 5 Activation Codel.md b/spaces/inreVtussa/clothingai/Examples/Brainsbreaker 5 Activation Codel.md deleted file mode 100644 index 74d8d3bcb0b2e8f91055c21dffb67734b4f5753f..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Brainsbreaker 5 Activation Codel.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Brainsbreaker 5 Activation Codel


DOWNLOAD: https://tiurll.com/2uClL0



- -You have to be careful when using TV shows, cracks, torrents, keygens and warez that you download from crack sites. They often contain adware, spyware, malware or viruses. Think twice before sharing a download link with friends or acquaintances. Many of these links have already been checked and are free of viruses. But you have to be careful and check every site because some may still have viruses. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/jackli888/stable-diffusion-webui/modules/sd_hijack_ip2p.py b/spaces/jackli888/stable-diffusion-webui/modules/sd_hijack_ip2p.py deleted file mode 100644 index 3c727d3b75332508629458d23f7fb86cc9ede44b..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/sd_hijack_ip2p.py +++ /dev/null @@ -1,13 +0,0 @@ -import collections -import os.path -import sys -import gc -import time - -def should_hijack_ip2p(checkpoint_info): - from modules import sd_models_config - - ckpt_basename = os.path.basename(checkpoint_info.filename).lower() - cfg_basename = os.path.basename(sd_models_config.find_checkpoint_config_near_filename(checkpoint_info)).lower() - - return "pix2pix" in ckpt_basename and not "pix2pix" in cfg_basename diff --git a/spaces/jackrui/diff-amp-antimicrobial_peptide_generation/README.md b/spaces/jackrui/diff-amp-antimicrobial_peptide_generation/README.md deleted file mode 100644 index dd26bb48b7028d041ab4c1af19780bdc0052eff5..0000000000000000000000000000000000000000 --- a/spaces/jackrui/diff-amp-antimicrobial_peptide_generation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AMP Classification -emoji: 🐠 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/sabio_download.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/sabio_download.py deleted file mode 100644 index 406dd3e908cf6369ef288df441661f02b566a1e0..0000000000000000000000000000000000000000 --- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/sabio_download.py +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/python -# coding: utf-8 - -# Author: LE YUAN -# Date: 2020-07-10 - -import requests - -# Extract EC number list from ExPASy, which is a repository of information relative to the nomenclature of enzymes. -def eclist(): - with open('../../Data/EC_enzyme/enzyme.dat', 'r') as outfile : - lines = outfile.readlines() - - ec_list = list() - for line in lines : - if line.startswith('ID') : - ec = line.strip().split(' ')[1] - ec_list.append(ec) - # print(ec_list) - print(len(ec_list)) # 7906 - return ec_list - -def sabio_info(allEC): - QUERY_URL = 'http://sabiork.h-its.org/sabioRestWebServices/kineticlawsExportTsv' - - # specify search fields and search terms - - # query_dict = {"Organism":'"lactococcus lactis subsp. lactis bv. diacetylactis"', "Product":'"Tyrosine"'} - # query_dict = {"Organism":'"lactococcus lactis subsp. lactis bv. 
diacetylactis"',} #saccharomyces cerevisiae escherichia coli - # query_dict = {"ECNumber":'"1.1.1.1"',} - i = 0 - for EC in allEC : - i += 1 - print('This is %d ----------------------------' %i) - print(EC) - query_dict = {"ECNumber":'%s' %EC,} - query_string = ' AND '.join(['%s:%s' % (k,v) for k,v in query_dict.items()]) - - - # specify output fields and send request - - query = {'fields[]':['EntryID', 'Substrate', 'EnzymeType', 'PubMedID', 'Organism', 'UniprotID','ECNumber','Parameter'], 'q':query_string} - # the 'Smiles' keyword could get all the smiles included in substrate and product - - request = requests.post(QUERY_URL, params = query) - # request.raise_for_status() - - - # results - results = request.text - print(results) - print('---------------------------------------------') - - if results : - with open('../../Data/database/Kcat_sabio_4/%s.txt' %EC, 'w') as ECfile : - ECfile.write(results) - - -if __name__ == '__main__' : - allEC = eclist() - sabio_info(allEC) - - diff --git a/spaces/jitesh/storytelling/src/create_statistics.py b/spaces/jitesh/storytelling/src/create_statistics.py deleted file mode 100644 index 29c77970342bf8641315284ae315f2cf4d45cb11..0000000000000000000000000000000000000000 --- a/spaces/jitesh/storytelling/src/create_statistics.py +++ /dev/null @@ -1,83 +0,0 @@ -import random - -import numpy as np -import plotly.express as px -import streamlit as st -import xlsxwriter -import pandas as pd - -from .lib import initialise_storytelling, set_input -import io - - -def run_create_statistics(gen, container_guide, container_param, container_button): - - first_sentence, first_emotion, length = initialise_storytelling( - gen, container_guide, container_param, container_button) - # story_till_now = first_sentence - - num_generation = set_input(container_param, - label='Number of generation', min_value=1, max_value=100, value=5, step=1, - key_slider='num_generation_slider', key_input='num_generation_input',) - - num_tests = set_input(container_param, - label='Number of tests', min_value=1, max_value=1000, value=3, step=1, - key_slider='num_tests_slider', key_input='num_tests_input',) - - reaction_weight_mode = container_param.radio( - "Reaction Weight w:", ["Random", "Fixed"]) - if reaction_weight_mode == "Fixed": - reaction_weight = set_input(container_param, - label='Reaction Weight w', min_value=0.0, max_value=1.0, value=0.5, step=0.01, - key_slider='w_slider', key_input='w_input',) - elif reaction_weight_mode == "Random": - reaction_weight = -1 - if container_button.button('Analyse'): - gen.get_stats(story_till_now=first_sentence, - num_generation=num_generation, length=length, reaction_weight=reaction_weight, num_tests=num_tests) - # if len(gen.stories) > 0: - # for si, story in enumerate(gen.stories): - # st.markdown(f'### Story no. 
{si}:', unsafe_allow_html=False) - # st.markdown(story, unsafe_allow_html=False) - # data=gen.stats_df[gen.stats_df.sentence_no==3] - # fig = px.violin(data_frame=data, x="reaction_weight", y="num_reactions", hover_data=data.columns) - # st.plotly_chart(fig, use_container_width=True) - # fig2 = px.box(data_frame=data, x="reaction_weight", y="num_reactions", hover_data=data.columns) - # st.plotly_chart(fig2, use_container_width=True) - if len(gen.data) > 0: - for si, story in enumerate(gen.data): - st.markdown(f'### Story {si}:', unsafe_allow_html=False) - for i, sentence in enumerate(story): - col_turn, col_sentence, col_emo = st.columns([1, 8, 2]) - col_turn.markdown( - sentence['turn'], unsafe_allow_html=False) - col_sentence.markdown( - sentence['sentence'], unsafe_allow_html=False) - col_emo.markdown( - f'{sentence["emotion"]} {np.round(sentence["confidence_score"], 3)}', unsafe_allow_html=False) - st.table(data=gen.stats_df, ) - data = gen.stats_df[gen.stats_df.sentence_no == 3] - fig = px.violin(data_frame=data, x="reaction_weight", - y="num_reactions", hover_data=data.columns) - st.plotly_chart(fig, use_container_width=True) - fig2 = px.box(data_frame=data, x="reaction_weight", - y="num_reactions", hover_data=data.columns) - st.plotly_chart(fig2, use_container_width=True) - # csv = gen.stats_df.to_csv().encode('utf-8') - - buffer = io.BytesIO() - with pd.ExcelWriter(buffer, engine='xlsxwriter') as writer: - # Write each dataframe to a different worksheet. - gen.stats_df.to_excel(writer, sheet_name='AllData') - - # Close the Pandas Excel writer and output the Excel file to the buffer - writer.save() - st.download_button( - label="Download data", - data=buffer, - file_name='data.xlsx', - mime='application/vnd.ms-excel', - ) - else: - container_guide.markdown( - '### You selected statistics. 
Now set your parameters and click the `Analyse` button.') diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/quic/_sync.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/quic/_sync.py deleted file mode 100644 index e944784dee94ac3ac39ff27a48653d534b638068..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/quic/_sync.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -import selectors -import socket -import ssl -import struct -import threading -import time - -import aioquic.quic.configuration # type: ignore -import aioquic.quic.connection # type: ignore -import aioquic.quic.events # type: ignore - -import dns.exception -import dns.inet -from dns.quic._common import ( - QUIC_MAX_DATAGRAM, - BaseQuicConnection, - BaseQuicManager, - BaseQuicStream, - UnexpectedEOF, -) - -# Avoid circularity with dns.query -if hasattr(selectors, "PollSelector"): - _selector_class = selectors.PollSelector # type: ignore -else: - _selector_class = selectors.SelectSelector # type: ignore - - -class SyncQuicStream(BaseQuicStream): - def __init__(self, connection, stream_id): - super().__init__(connection, stream_id) - self._wake_up = threading.Condition() - self._lock = threading.Lock() - - def wait_for(self, amount, expiration): - while True: - timeout = self._timeout_from_expiration(expiration) - with self._lock: - if self._buffer.have(amount): - return - self._expecting = amount - with self._wake_up: - if not self._wake_up.wait(timeout): - raise dns.exception.Timeout - self._expecting = 0 - - def receive(self, timeout=None): - expiration = self._expiration_from_timeout(timeout) - self.wait_for(2, expiration) - with self._lock: - (size,) = struct.unpack("!H", self._buffer.get(2)) - self.wait_for(size, expiration) - with self._lock: - return self._buffer.get(size) - - def send(self, datagram, is_end=False): - data = self._encapsulate(datagram) - self._connection.write(self._stream_id, data, is_end) - - def _add_input(self, data, is_end): - if self._common_add_input(data, is_end): - with self._wake_up: - self._wake_up.notify() - - def close(self): - with self._lock: - self._close() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - with self._wake_up: - self._wake_up.notify() - return False - - -class SyncQuicConnection(BaseQuicConnection): - def __init__(self, connection, address, port, source, source_port, manager): - super().__init__(connection, address, port, source, source_port, manager) - self._socket = socket.socket(self._af, socket.SOCK_DGRAM, 0) - self._socket.connect(self._peer) - (self._send_wakeup, self._receive_wakeup) = socket.socketpair() - self._receive_wakeup.setblocking(False) - self._socket.setblocking(False) - if self._source is not None: - try: - self._socket.bind( - dns.inet.low_level_address_tuple(self._source, self._af) - ) - except Exception: - self._socket.close() - raise - self._handshake_complete = threading.Event() - self._worker_thread = None - self._lock = threading.Lock() - - def _read(self): - count = 0 - while count < 10: - count += 1 - try: - datagram = self._socket.recv(QUIC_MAX_DATAGRAM) - except BlockingIOError: - return - with self._lock: - self._connection.receive_datagram(datagram, self._peer[0], time.time()) - - def _drain_wakeup(self): - while True: - try: - self._receive_wakeup.recv(32) - except BlockingIOError: - return - - 
def _worker(self): - try: - sel = _selector_class() - sel.register(self._socket, selectors.EVENT_READ, self._read) - sel.register(self._receive_wakeup, selectors.EVENT_READ, self._drain_wakeup) - while not self._done: - (expiration, interval) = self._get_timer_values(False) - items = sel.select(interval) - for key, _ in items: - key.data() - with self._lock: - self._handle_timer(expiration) - datagrams = self._connection.datagrams_to_send(time.time()) - for datagram, _ in datagrams: - try: - self._socket.send(datagram) - except BlockingIOError: - # we let QUIC handle any lossage - pass - self._handle_events() - finally: - with self._lock: - self._done = True - # Ensure anyone waiting for this gets woken up. - self._handshake_complete.set() - - def _handle_events(self): - while True: - with self._lock: - event = self._connection.next_event() - if event is None: - return - if isinstance(event, aioquic.quic.events.StreamDataReceived): - with self._lock: - stream = self._streams.get(event.stream_id) - if stream: - stream._add_input(event.data, event.end_stream) - elif isinstance(event, aioquic.quic.events.HandshakeCompleted): - self._handshake_complete.set() - elif isinstance( - event, aioquic.quic.events.ConnectionTerminated - ) or isinstance(event, aioquic.quic.events.StreamReset): - with self._lock: - self._done = True - - def write(self, stream, data, is_end=False): - with self._lock: - self._connection.send_stream_data(stream, data, is_end) - self._send_wakeup.send(b"\x01") - - def run(self): - if self._closed: - return - self._worker_thread = threading.Thread(target=self._worker) - self._worker_thread.start() - - def make_stream(self, timeout=None): - if not self._handshake_complete.wait(timeout): - raise dns.exception.Timeout - with self._lock: - if self._done: - raise UnexpectedEOF - stream_id = self._connection.get_next_available_stream_id(False) - stream = SyncQuicStream(self, stream_id) - self._streams[stream_id] = stream - return stream - - def close_stream(self, stream_id): - with self._lock: - super().close_stream(stream_id) - - def close(self): - with self._lock: - if self._closed: - return - self._manager.closed(self._peer[0], self._peer[1]) - self._closed = True - self._connection.close() - self._send_wakeup.send(b"\x01") - self._worker_thread.join() - - -class SyncQuicManager(BaseQuicManager): - def __init__(self, conf=None, verify_mode=ssl.CERT_REQUIRED, server_name=None): - super().__init__(conf, verify_mode, SyncQuicConnection, server_name) - self._lock = threading.Lock() - - def connect(self, address, port=853, source=None, source_port=0): - with self._lock: - (connection, start) = self._connect(address, port, source, source_port) - if start: - connection.run() - return connection - - def closed(self, address, port): - with self._lock: - super().closed(address, port) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - # Copy the iterator into a list as exiting things will mutate the connections - # table. - connections = list(self._connections.values()) - for connection in connections: - connection.close() - return False diff --git a/spaces/jordonpeter01/MusicGen/audiocraft/utils/__init__.py b/spaces/jordonpeter01/MusicGen/audiocraft/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen/audiocraft/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/metrics.py b/spaces/juancopi81/youtube-music-transcribe/t5x/metrics.py deleted file mode 100644 index 93d9a50f9f7d58d66b4a882b1219d3adb39392ac..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/metrics.py +++ /dev/null @@ -1,323 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""T5X Metrics. - -Defines Metric objects and collections used by T5X models. These objects use the -CLU metrics library -""" - -import dataclasses -from typing import MutableMapping, Optional, Union - -from clu import metrics as clu_metrics -import flax # Only used for flax.struct.dataclass. -import jax -from jax.experimental.global_device_array import GlobalDeviceArray -import jax.numpy as jnp -import numpy as np - -MetricsMap = MutableMapping[str, clu_metrics.Metric] -Scalar = Union[int, float, np.number, np.ndarray, jnp.ndarray] - - -def _check_param(value, *, ndim=None, dtype=jnp.float32): - """Raises a `ValueError` if `value` does not match ndim/dtype. - - Args: - value: Value to be tested. - ndim: Expected dimensions. - dtype: Expected dtype. - - Raises: - A `ValueError` if `value` does not match `ndim` or `dtype`, or if `value` - is not an instance of `jnp.ndarray`. - """ - if ndim is not None and value.ndim != ndim: - raise ValueError(f"Expected ndim={ndim}, got ndim={value.ndim}") - if dtype is not None and value.dtype != dtype: - raise ValueError(f"Expected dtype={dtype}, got dtype={value.dtype}") - - -@flax.struct.dataclass -class Sum(clu_metrics.Metric): - """Computes the sum of a scalar or a batch of tensors. - - See also documentation of `Metric`. - """ - - total: Scalar - - @classmethod - def from_model_output(cls, values: Scalar, **_) -> clu_metrics.Metric: - """Initializes a Sum Metric from array (or singular) values. - - Args: - values: array of values to sum (or a single value). - - Returns: - A Sum object. - """ - values = jnp.asarray(values) - if values.ndim == 0: - values = values[None] - return cls(total=values.sum()) - - def merge(self, other: "Sum") -> "Sum": - return type(self)(total=self.total + other.total) - - def compute(self) -> Scalar: - return self.total - - -@flax.struct.dataclass -class Step(clu_metrics.Metric): - """Abstract class representing a per-step or step-per metric. - - Tracks number of steps. Must be set manually using replace_steps, since the - use of microbatches may otherwise cause the computation to be incorrect. - - See also documentation of `Metric`. - """ - steps: Optional[int] = 1 - - def replace_steps(self, steps) -> "Step": - return self.replace(steps=steps) - - def compute(self) -> Scalar: - if self.steps is None: - raise ValueError( - "`steps` must be set by calling `replace_steps` before computing metric." 
- ) - return self.steps - - -@flax.struct.dataclass -class AveragePerStep(Step): - """Represents per-step average (total divided by number of steps). - - See also documentation of `Step`. - """ - total: Optional[Scalar] = None - - @classmethod - def from_model_output(cls, - values: Scalar, - steps: Optional[int] = 1, - **_) -> clu_metrics.Metric: - """Initializes an AveragePerStep Metric from array (or singular) values. - - Args: - values: array of values to sum (or a single value). - steps: number of steps, defaults to 1. - - Returns: - AveragePerStep object. - """ - values = jnp.asarray(values) - if values.ndim == 0: - values = values[None] - return cls(total=values.sum(), steps=steps) - - def merge(self, other: "AveragePerStep") -> "AveragePerStep": - assert type(self) is type(other) - return type(self)( - total=self.total + other.total, steps=self.steps + other.steps) - - def compute(self) -> Scalar: - steps = super().compute() - if self.total is None: - raise ValueError("`AveragePerStep` `total` cannot be None.") - return self.total / steps - - -@flax.struct.dataclass -class Time(clu_metrics.Metric): - """Computes the sum of a float-valued metric over a period of time. - - Duration (the denominator) must be set manually. This is because JAX does not - properly support time functions inside compiled functions. Calling time.time() - inside a compiled function results in the stored time being the compilation - time, not the run time. - - See also documentation of `Metric`. - """ - duration: Optional[Scalar] = None - - def merge(self, other: "Time") -> "Time": - return self - - def compute(self) -> Scalar: - if self.duration is None: - raise ValueError( - "`Time` `duration` must be set by calling `replace_duration` before computing." - ) - return self.duration - - def replace_duration(self, duration: Scalar) -> "Time": - """Replaces duration with the given value. - - Should be used outside a compiled function to set the duration of the - metric. - - Args: - duration: metric duration - - Returns: - A new Time object. - """ - return self.replace(duration=duration) - - -@flax.struct.dataclass -class TimeRate(Time): - """Computes the sum of a float-valued metric over a period of time. - - Duration (the denominator) must be set using replace_duration. This is because - JAX does not properly support time functions inside compiled functions. - Calling time.time() inside a compiled function results in the stored time - being the compilation time, not the run time. - - See also documentation of `Time` and `Metric`. - """ - - numerator: Optional[jnp.ndarray] = None - - @classmethod - def from_model_output(cls, numerator: float, **_) -> clu_metrics.Metric: - """Initializes a TimeRate Metric from a float value (the numerator). - - Args: - numerator: a float (numerator of the metric) - - Returns: - A TimeRate object. - """ - return cls(numerator=numerator) - - def merge(self, other: "TimeRate") -> "TimeRate": - assert_msg = "Merging with non-None durations is currently not supported." 
- assert self.duration is None and other.duration is None, assert_msg - return type(self)(numerator=self.numerator + other.numerator) - - def compute(self) -> Scalar: - duration = super().compute() - return self.numerator / duration - - def replace_duration(self, duration: Scalar) -> "Time": - if not (isinstance(self.numerator, np.ndarray) or - isinstance(self.numerator, GlobalDeviceArray)): - raise ValueError( - "Expected numerator to be of type np.ndarray or GlobalDeviceArray " - "since method should be called outside of a compiled function. " - "Got ", type(self.numerator)) - return super().replace_duration(duration) - - -@flax.struct.dataclass -class StepsPerTime(Step, Time): - """Represents a metric computed as number of steps per time. - - See also documentation of `Step`. - """ - - @classmethod - def from_model_output(cls, - steps: Optional[int] = 1, - **_) -> clu_metrics.Metric: - """Initializes an StepsPerTime Metric. - - Args: - steps: number of steps, defaults to 1. - - Returns: - StepsPerTime object. - """ - return cls(steps=steps) - - def merge(self, other: "StepsPerTime") -> "StepsPerTime": - assert type(self) is type(other) - return type(self)(steps=self.steps + other.steps) - - def compute(self) -> Scalar: - steps = Step.compute(self) - duration = Time.compute(self) - return steps / duration - - -def is_metric_obj(obj): - return isinstance(obj, clu_metrics.Metric) - - -def is_time_metric(obj): - return isinstance(obj, Time) - - -def create_metrics_dict(float_metrics_dict): - """Input: dict{str: float} | Output: dict{str: Metric}.""" - return {k: Sum(v) for k, v in float_metrics_dict.items()} - - -def shape_obj_to_defined_obj(obj: clu_metrics.Metric): - """Converts shapes in Metric to zero arrays. - - obj should be a Metric object subclass where each member variable is a - ShapeDtypeStruct (from jax.eval_shape). A new object of the same class where - each member variable is an array of zeros with the same shape and type as - the corresponding variable defined by ShapeDtypeStruct. - - Args: - obj: a clu.metrics.Metric object where each member variable is a - ShapeDtypeStruct (from jax.eval_shape) - - Returns: - A Metric object with class variables initialized as zero arrays. 
- """ - - def class_attr_shape(a): - attr = getattr(obj, a.name) - if isinstance(attr, clu_metrics.Metric): - return shape_obj_to_defined_obj(attr) - else: - if hasattr(attr, "shape"): - return jnp.zeros(shape=attr.shape, dtype=attr.dtype) - else: - return attr - - return obj.__class__( - **{a.name: class_attr_shape(a) for a in dataclasses.fields(obj)}) - - -def set_time_metrics_duration(metrics, duration): - """Sets duration for TimeRate objects in metrics pytree.""" - - def fn(o): - if isinstance(o, Time): - return o.replace_duration(duration) - else: - return o - - return jax.tree_map(fn, metrics, is_leaf=lambda obj: isinstance(obj, Time)) - - -def set_step_metrics_num_steps(metrics, num_steps): - """Sets steps for Step objects in metrics pytree.""" - - def fn(o): - if isinstance(o, Step): - return o.replace_steps(num_steps) - else: - return o - - return jax.tree_map(fn, metrics, is_leaf=is_metric_obj) diff --git a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/PaLM.py b/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/PaLM.py deleted file mode 100644 index 8e77e8fe13004ffbd7665a29957b7567af3af80a..0000000000000000000000000000000000000000 --- a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/PaLM.py +++ /dev/null @@ -1,11 +0,0 @@ -from .base_model import BaseLLMModel, CallbackToIterator, ChuanhuCallbackHandler -from langchain.chat_models import ChatGooglePalm -import os - -class PaLM_Client(BaseLLMModel): - def __init__(self, model_name, user="") -> None: - super().__init__(model_name, user) - self.llm = ChatGooglePalm(google_api_key=os.environ["GOOGLE_PALM_API_KEY"]) - - def get_answer_at_once(self): - self.llm.generate(self.history) \ No newline at end of file diff --git a/spaces/kaicheng/ChatGPT_ad/modules/models/base_model.py b/spaces/kaicheng/ChatGPT_ad/modules/models/base_model.py deleted file mode 100644 index 0c703b6750cbea953bbe8e97a806473831035c0a..0000000000000000000000000000000000000000 --- a/spaces/kaicheng/ChatGPT_ad/modules/models/base_model.py +++ /dev/null @@ -1,685 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback -import pathlib - -from tqdm import tqdm -import colorama -from duckduckgo_search import DDGS -from itertools import islice -import asyncio -import aiohttp -from enum import Enum - -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler -from langchain.callbacks.manager import BaseCallbackManager - -from typing import Any, Dict, List, Optional, Union - -from langchain.callbacks.base import BaseCallbackHandler -from langchain.input import print_text -from langchain.schema import AgentAction, AgentFinish, LLMResult -from threading import Thread, Condition -from collections import deque - -from ..presets import * -from ..index_func import * -from ..utils import * -from .. import shared -from ..config import retrieve_proxy - -class CallbackToIterator: - def __init__(self): - self.queue = deque() - self.cond = Condition() - self.finished = False - - def callback(self, result): - with self.cond: - self.queue.append(result) - self.cond.notify() # Wake up the generator. - - def __iter__(self): - return self - - def __next__(self): - with self.cond: - while not self.queue and not self.finished: # Wait for a value to be added to the queue. 
- self.cond.wait() - if not self.queue: - raise StopIteration() - return self.queue.popleft() - - def finish(self): - with self.cond: - self.finished = True - self.cond.notify() # Wake up the generator if it's waiting. - -def get_action_description(text): - match = re.search('```(.*?)```', text, re.S) - json_text = match.group(1) - # 把json转化为python字典 - json_dict = json.loads(json_text) - # 提取'action'和'action_input'的值 - action_name = json_dict['action'] - action_input = json_dict['action_input'] - if action_name != "Final Answer": - return f'
            {action_name}: {action_input}
            ' - else: - return "" - -class ChuanhuCallbackHandler(BaseCallbackHandler): - - def __init__(self, callback) -> None: - """Initialize callback handler.""" - self.callback = callback - - def on_agent_action( - self, action: AgentAction, color: Optional[str] = None, **kwargs: Any - ) -> Any: - self.callback(get_action_description(action.log)) - - def on_tool_end( - self, - output: str, - color: Optional[str] = None, - observation_prefix: Optional[str] = None, - llm_prefix: Optional[str] = None, - **kwargs: Any, - ) -> None: - """If not the final action, print out observation.""" - # if observation_prefix is not None: - # self.callback(f"\n\n{observation_prefix}") - # self.callback(output) - # if llm_prefix is not None: - # self.callback(f"\n\n{llm_prefix}") - if observation_prefix is not None: - logging.info(observation_prefix) - self.callback(output) - if llm_prefix is not None: - logging.info(llm_prefix) - - def on_agent_finish( - self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any - ) -> None: - # self.callback(f"{finish.log}\n\n") - logging.info(finish.log) - - def on_llm_new_token(self, token: str, **kwargs: Any) -> None: - """Run on new LLM token. Only available when streaming is enabled.""" - self.callback(token) - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - StableLM = 4 - MOSS = 5 - YuanAI = 6 - Minimax = 7 - ChuanhuAgent = 8 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - elif "stablelm" in model_name_lower: - model_type = ModelType.StableLM - elif "moss" in model_name_lower: - model_type = ModelType.MOSS - elif "yuanai" in model_name_lower: - model_type = ModelType.YuanAI - elif "minimax" in model_name_lower: - model_type = ModelType.Minimax - elif "川虎助理" in model_name_lower: - model_type = ModelType.ChuanhuAgent - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict 
instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - # logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - if display_append: - display_append = "
            " +display_append - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot, language): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - index = construct_index(self.api_key, file_src=files) - status = i18n("索引构建完成") - return gr.Files.update(), chatbot, status - - def summarize_index(self, files, chatbot, language): - status = gr.Markdown.update() - if files: - index = construct_index(self.api_key, file_src=files) - status = i18n("总结完成") - logging.info(i18n("生成内容总结中……")) - os.environ["OPENAI_API_KEY"] = self.api_key - from langchain.chains.summarize import load_summarize_chain - from langchain.prompts import PromptTemplate - from langchain.chat_models import ChatOpenAI - from langchain.callbacks import StdOutCallbackHandler - prompt_template = "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN " + language + ":" - PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"]) - llm = ChatOpenAI() - chain = load_summarize_chain(llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT) - summary = chain({"input_documents": list(index.docstore.__dict__["_dict"].values())}, return_only_outputs=True)["output_text"] - print(i18n("总结") + f": {summary}") - chatbot.append([i18n("上传了")+str(len(files))+"个文件", summary]) - return chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.vectorstores.base import VectorStoreRetriever - limited_context = True - msg = "加载索引中……" - logging.info(msg) - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - with retrieve_proxy(): - retriever = VectorStoreRetriever(vectorstore=index, search_type="similarity_score_threshold",search_kwargs={"k":6, "score_threshold": 0.5}) - relevant_documents = retriever.get_relevant_documents(real_inputs) - reference_results = [[d.page_content.strip("�"), os.path.basename(d.metadata["source"])] for d in relevant_documents] - reference_results = add_source_numbers(reference_results) - display_append = 
add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - search_results = [] - with DDGS() as ddgs: - ddgs_gen = ddgs.text(real_inputs, backend="lite") - for r in islice(ddgs_gen, 10): - search_results.append(r) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result['href']).host - reference_results.append([result['body'], result['href']]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
          • {result['title']}
          • \n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
              \n\n" + "".join(display_append) + "
            " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "用户" + f"{self.user_identifier}" + "的输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - self.auto_save(chatbot) - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - if len(self.all_token_counts) > 0: - 
self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - pathlib.Path(os.path.join(HISTORY_DIR, self.user_identifier, new_auto_history_filename(os.path.join(HISTORY_DIR, self.user_identifier)))).touch() - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = 
"删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def auto_save(self, chatbot): - history_file_path = get_history_filepath(self.user_identifier) - save_file(history_file_path, self.system_prompt, self.history, chatbot, self.user_identifier) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - logging.info(f"filename: {filename}") - if type(filename) != str and filename is not None: - filename = filename.name - try: - if "/" not in filename: - history_file_path = os.path.join(HISTORY_DIR, user_name, filename) - else: - history_file_path = filename - with open(history_file_path, "r", encoding="utf-8") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return os.path.basename(filename), json_s["system"], json_s["chatbot"] - except: - # 没有对话历史或者对话历史解析失败 - logging.info(f"没有找到对话历史记录 {filename}") - return gr.update(), self.system_prompt, gr.update() - - def auto_load(self): - if self.user_identifier == "": - self.reset() - return self.system_prompt, gr.update() - history_file_path = get_history_filepath(self.user_identifier) - filename, system_prompt, chatbot = self.load_chat_history(history_file_path, self.user_identifier) - return system_prompt, chatbot - - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/kcagle/AutoGPT/autogpt/json_utils/__init__.py b/spaces/kcagle/AutoGPT/autogpt/json_utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kevinwang676/Bert-VITS2/text/tone_sandhi.py b/spaces/kevinwang676/Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. 
- # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/pirender/config.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/pirender/config.py deleted file mode 100644 index c3f917385b5b1f7ed2809d963d3ad0f0c754459b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/pirender/config.py +++ /dev/null @@ -1,211 +0,0 @@ -import collections -import functools -import os -import re - -import yaml - -class AttrDict(dict): - """Dict as attribute trick.""" - - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - for key, value in self.__dict__.items(): - if isinstance(value, dict): - self.__dict__[key] = AttrDict(value) - elif isinstance(value, (list, tuple)): - if isinstance(value[0], dict): - self.__dict__[key] = [AttrDict(item) for item in value] - else: - self.__dict__[key] = value - - def yaml(self): - """Convert object to yaml dict and return.""" - yaml_dict = {} - for key, value in self.__dict__.items(): - if isinstance(value, AttrDict): - yaml_dict[key] = value.yaml() - elif isinstance(value, list): - if isinstance(value[0], AttrDict): - new_l = [] - for item in value: - new_l.append(item.yaml()) - yaml_dict[key] = new_l - else: - yaml_dict[key] = value - else: - yaml_dict[key] = value - return yaml_dict - - def __repr__(self): - """Print all variables.""" - ret_str = [] - for key, value in self.__dict__.items(): - if isinstance(value, AttrDict): - ret_str.append('{}:'.format(key)) - child_ret_str = value.__repr__().split('\n') - for item in child_ret_str: - ret_str.append(' ' + item) - elif isinstance(value, list): - if isinstance(value[0], AttrDict): - ret_str.append('{}:'.format(key)) - for item in value: - # Treat as AttrDict above. - child_ret_str = item.__repr__().split('\n') - for item in child_ret_str: - ret_str.append(' ' + item) - else: - ret_str.append('{}: {}'.format(key, value)) - else: - ret_str.append('{}: {}'.format(key, value)) - return '\n'.join(ret_str) - - -class Config(AttrDict): - r"""Configuration class. This should include every human specifiable - hyperparameter values for your training.""" - - def __init__(self, filename=None, args=None, verbose=False, is_train=True): - super(Config, self).__init__() - # Set default parameters. - # Logging. - - large_number = 1000000000 - self.snapshot_save_iter = large_number - self.snapshot_save_epoch = large_number - self.snapshot_save_start_iter = 0 - self.snapshot_save_start_epoch = 0 - self.image_save_iter = large_number - self.eval_epoch = large_number - self.start_eval_epoch = large_number - self.eval_epoch = large_number - self.max_epoch = large_number - self.max_iter = large_number - self.logging_iter = 100 - self.image_to_tensorboard=False - self.which_iter = 0 # args.which_iter - self.resume = False - - self.checkpoints_dir = '/Users/shadowcun/Downloads/' - self.name = 'face' - self.phase = 'train' if is_train else 'test' - - # Networks. - self.gen = AttrDict(type='generators.dummy') - self.dis = AttrDict(type='discriminators.dummy') - - # Optimizers. 
- self.gen_optimizer = AttrDict(type='adam', - lr=0.0001, - adam_beta1=0.0, - adam_beta2=0.999, - eps=1e-8, - lr_policy=AttrDict(iteration_mode=False, - type='step', - step_size=large_number, - gamma=1)) - self.dis_optimizer = AttrDict(type='adam', - lr=0.0001, - adam_beta1=0.0, - adam_beta2=0.999, - eps=1e-8, - lr_policy=AttrDict(iteration_mode=False, - type='step', - step_size=large_number, - gamma=1)) - # Data. - self.data = AttrDict(name='dummy', - type='datasets.images', - num_workers=0) - self.test_data = AttrDict(name='dummy', - type='datasets.images', - num_workers=0, - test=AttrDict(is_lmdb=False, - roots='', - batch_size=1)) - self.trainer = AttrDict( - model_average=False, - model_average_beta=0.9999, - model_average_start_iteration=1000, - model_average_batch_norm_estimation_iteration=30, - model_average_remove_sn=True, - image_to_tensorboard=False, - hparam_to_tensorboard=False, - distributed_data_parallel='pytorch', - delay_allreduce=True, - gan_relativistic=False, - gen_step=1, - dis_step=1) - - # # Cudnn. - self.cudnn = AttrDict(deterministic=False, - benchmark=True) - - # Others. - self.pretrained_weight = '' - self.inference_args = AttrDict() - - - # Update with given configurations. - assert os.path.exists(filename), 'File {} not exist.'.format(filename) - loader = yaml.SafeLoader - loader.add_implicit_resolver( - u'tag:yaml.org,2002:float', - re.compile(u'''^(?: - [-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)? - |[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+) - |\\.[0-9_]+(?:[eE][-+][0-9]+)? - |[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\\.[0-9_]* - |[-+]?\\.(?:inf|Inf|INF) - |\\.(?:nan|NaN|NAN))$''', re.X), - list(u'-+0123456789.')) - try: - with open(filename, 'r') as f: - cfg_dict = yaml.load(f, Loader=loader) - except EnvironmentError: - print('Please check the file with name of "%s"', filename) - recursive_update(self, cfg_dict) - - # Put common opts in both gen and dis. 
- if 'common' in cfg_dict: - self.common = AttrDict(**cfg_dict['common']) - self.gen.common = self.common - self.dis.common = self.common - - - if verbose: - print(' config '.center(80, '-')) - print(self.__repr__()) - print(''.center(80, '-')) - - -def rsetattr(obj, attr, val): - """Recursively find object and set value""" - pre, _, post = attr.rpartition('.') - return setattr(rgetattr(obj, pre) if pre else obj, post, val) - - -def rgetattr(obj, attr, *args): - """Recursively find object and return value""" - - def _getattr(obj, attr): - r"""Get attribute.""" - return getattr(obj, attr, *args) - - return functools.reduce(_getattr, [obj] + attr.split('.')) - - -def recursive_update(d, u): - """Recursively update AttrDict d with AttrDict u""" - for key, value in u.items(): - if isinstance(value, collections.abc.Mapping): - d.__dict__[key] = recursive_update(d.get(key, AttrDict({})), value) - elif isinstance(value, (list, tuple)): - if isinstance(value[0], dict): - d.__dict__[key] = [AttrDict(item) for item in value] - else: - d.__dict__[key] = value - else: - d.__dict__[key] = value - return d diff --git a/spaces/kevinwang676/FreeVC-en/mel_processing.py b/spaces/kevinwang676/FreeVC-en/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/FreeVC-en/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if 
fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/kevinwang676/voice-conversion-yourtts/id3tagging.py b/spaces/kevinwang676/voice-conversion-yourtts/id3tagging.py deleted file mode 100644 index d523de20cf9a8de98f7317dbb7e01c546fcd22da..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/voice-conversion-yourtts/id3tagging.py +++ /dev/null @@ -1,14 +0,0 @@ -from mutagen.wave import WAVE -from mutagen.id3._frames import * - -def add_id3_tag(filename, text, speakername, seed): - audio = WAVE(filename) - if speakername == None: - speakername = "Unconditional" - - # write id3 tag with text truncated to 60 chars, as a precaution... - audio["TIT2"] = TIT2(encoding=3, text=text[:60]) - audio["TPE1"] = TPE1(encoding=3, text=f"Voice {speakername} using Seed={seed}") - audio["TPUB"] = TPUB(encoding=3, text="Bark by Suno AI") - audio["COMMENT"] = COMM(encoding=3, text="Generated with Bark GUI - Text-Prompted Generative Audio Model. 
Visit https://github.com/C0untFloyd/bark-gui") - audio.save() \ No newline at end of file diff --git a/spaces/kingfisher/spacy-ner/README.md b/spaces/kingfisher/spacy-ner/README.md deleted file mode 100644 index f349589065874a93014332a1aad493ae4a37cbef..0000000000000000000000000000000000000000 --- a/spaces/kingfisher/spacy-ner/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Spacy Ner -emoji: 📊 -colorFrom: indigo -colorTo: red -sdk: streamlit -app_file: app.py -pinned: false -license: cc-by-nc-sa-4.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/drive.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/drive.py deleted file mode 100644 index 3cbfda8ae74bdf26c5aef197ff2866a7c7ad0cfd..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/datasets/drive.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class DRIVEDataset(CustomDataset): - """DRIVE dataset. - - In segmentation map annotation for DRIVE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_manual1.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(DRIVEDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_manual1.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/byte_level_bpe/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/byte_level_bpe/README.md deleted file mode 100644 index 657092660eae42d20f67647417623b8b8cb7b66c..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/byte_level_bpe/README.md +++ /dev/null @@ -1,88 +0,0 @@ -# Neural Machine Translation with Byte-Level Subwords - -https://arxiv.org/abs/1909.03341 - -We provide an implementation of byte-level byte-pair encoding (BBPE), taking IWSLT 2017 Fr-En translation as -example. 
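The key point is that BBPE learns its merges over UTF-8 bytes rather than characters, so the base vocabulary is at most 256 symbols and any input text can be encoded without unknown tokens. As a rough illustrative sketch only (the project's actual tokenizers are SentencePiece models produced by `get_data.sh`, e.g. `data/spm_bbpe2048.model`), mapping text to byte symbols before BPE might look like this:

```python
def to_byte_symbols(text: str) -> list:
    """Represent a string as UTF-8 byte symbols (illustrative sketch, not the real tokenizer)."""
    symbols = []
    for b in text.encode("utf-8"):
        # Keep printable ASCII readable; escape every other byte as a hex token.
        if 33 <= b <= 126:
            symbols.append(chr(b))
        else:
            symbols.append(f"<{b:02x}>")
    return symbols

print(to_byte_symbols("résumé"))
# ['r', '<c3>', '<a9>', 's', 'u', 'm', '<c3>', '<a9>']
```

A BBPE vocabulary then learns merge rules over these byte symbols in exactly the way character-level BPE learns merges over characters.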
- -## Data -Get data and generate fairseq binary dataset: -```bash -bash ./get_data.sh -``` - -## Model Training -Train Transformer model with Bi-GRU embedding contextualization (implemented in `gru_transformer.py`): -```bash -# VOCAB=bytes -# VOCAB=chars -VOCAB=bbpe2048 -# VOCAB=bpe2048 -# VOCAB=bbpe4096 -# VOCAB=bpe4096 -# VOCAB=bpe16384 -``` -```bash -fairseq-train "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --arch gru_transformer --encoder-layers 2 --decoder-layers 2 --dropout 0.3 --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --log-format 'simple' --log-interval 100 --save-dir "checkpoints/${VOCAB}" \ - --batch-size 100 --max-update 100000 --update-freq 2 -``` - -## Generation -`fairseq-generate` requires bytes (BBPE) decoder to convert byte-level representation back to characters: -```bash -# BPE=--bpe bytes -# BPE=--bpe characters -BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe2048.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe2048.model -# BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe16384.model -``` - -```bash -fairseq-generate "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --source-lang fr --gen-subset test --sacrebleu --path "checkpoints/${VOCAB}/checkpoint_last.pt" \ - --tokenizer moses --moses-target-lang en ${BPE} -``` -When using `fairseq-interactive`, bytes (BBPE) encoder/decoder is required to tokenize input data and detokenize model predictions: -```bash -fairseq-interactive "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --path "checkpoints/${VOCAB}/checkpoint_last.pt" --input data/test.fr --tokenizer moses --moses-source-lang fr \ - --moses-target-lang en ${BPE} --buffer-size 1000 --max-tokens 10000 -``` - -## Results -| Vocabulary | Model | BLEU | -|:-------------:|:-------------:|:-------------:| -| Joint BPE 16k ([Kudo, 2018](https://arxiv.org/abs/1804.10959)) | 512d LSTM 2+2 | 33.81 | -| Joint BPE 16k | Transformer base 2+2 (w/ GRU) | 36.64 (36.72) | -| Joint BPE 4k | Transformer base 2+2 (w/ GRU) | 35.49 (36.10) | -| Joint BBPE 4k | Transformer base 2+2 (w/ GRU) | 35.61 (35.82) | -| Joint BPE 2k | Transformer base 2+2 (w/ GRU) | 34.87 (36.13) | -| Joint BBPE 2k | Transformer base 2+2 (w/ GRU) | 34.98 (35.43) | -| Characters | Transformer base 2+2 (w/ GRU) | 31.78 (33.30) | -| Bytes | Transformer base 2+2 (w/ GRU) | 31.57 (33.62) | - - -## Citation -``` -@misc{wang2019neural, - title={Neural Machine Translation with Byte-Level Subwords}, - author={Changhan Wang and Kyunghyun Cho and Jiatao Gu}, - year={2019}, - eprint={1909.03341}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` - - -## Contact -Changhan Wang ([changhan@fb.com](mailto:changhan@fb.com)), -Kyunghyun Cho ([kyunghyuncho@fb.com](mailto:kyunghyuncho@fb.com)), -Jiatao Gu ([jgu@fb.com](mailto:jgu@fb.com)) diff --git a/spaces/kosurisiva/MyGenAiChatBot/app.py b/spaces/kosurisiva/MyGenAiChatBot/app.py deleted file mode 100644 index 2dbf3ae89c2e3fdab7134107dd346f984dca8eb1..0000000000000000000000000000000000000000 --- a/spaces/kosurisiva/MyGenAiChatBot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as 
gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/kouenYoung/anime-tts/text/sanskrit.py b/spaces/kouenYoung/anime-tts/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/kouenYoung/anime-tts/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/E_B_L_C_.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/E_B_L_C_.py deleted file mode 100644 index 9cc60ff82d23d9348dc956b8c4f44139226e4de6..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/E_B_L_C_.py +++ /dev/null @@ -1,717 +0,0 @@ -from fontTools.misc import sstruct -from . 
import DefaultTable -from fontTools.misc.textTools import bytesjoin, safeEval -from .BitmapGlyphMetrics import ( - BigGlyphMetrics, - bigGlyphMetricsFormat, - SmallGlyphMetrics, - smallGlyphMetricsFormat, -) -import struct -import itertools -from collections import deque -import logging - - -log = logging.getLogger(__name__) - -eblcHeaderFormat = """ - > # big endian - version: 16.16F - numSizes: I -""" -# The table format string is split to handle sbitLineMetrics simply. -bitmapSizeTableFormatPart1 = """ - > # big endian - indexSubTableArrayOffset: I - indexTablesSize: I - numberOfIndexSubTables: I - colorRef: I -""" -# The compound type for hori and vert. -sbitLineMetricsFormat = """ - > # big endian - ascender: b - descender: b - widthMax: B - caretSlopeNumerator: b - caretSlopeDenominator: b - caretOffset: b - minOriginSB: b - minAdvanceSB: b - maxBeforeBL: b - minAfterBL: b - pad1: b - pad2: b -""" -# hori and vert go between the two parts. -bitmapSizeTableFormatPart2 = """ - > # big endian - startGlyphIndex: H - endGlyphIndex: H - ppemX: B - ppemY: B - bitDepth: B - flags: b -""" - -indexSubTableArrayFormat = ">HHL" -indexSubTableArraySize = struct.calcsize(indexSubTableArrayFormat) - -indexSubHeaderFormat = ">HHL" -indexSubHeaderSize = struct.calcsize(indexSubHeaderFormat) - -codeOffsetPairFormat = ">HH" -codeOffsetPairSize = struct.calcsize(codeOffsetPairFormat) - - -class table_E_B_L_C_(DefaultTable.DefaultTable): - - dependencies = ["EBDT"] - - # This method can be overridden in subclasses to support new formats - # without changing the other implementation. Also can be used as a - # convenience method for coverting a font file to an alternative format. - def getIndexFormatClass(self, indexFormat): - return eblc_sub_table_classes[indexFormat] - - def decompile(self, data, ttFont): - - # Save the original data because offsets are from the start of the table. 
- origData = data - i = 0 - - dummy = sstruct.unpack(eblcHeaderFormat, data[:8], self) - i += 8 - - self.strikes = [] - for curStrikeIndex in range(self.numSizes): - curStrike = Strike() - self.strikes.append(curStrike) - curTable = curStrike.bitmapSizeTable - dummy = sstruct.unpack2( - bitmapSizeTableFormatPart1, data[i : i + 16], curTable - ) - i += 16 - for metric in ("hori", "vert"): - metricObj = SbitLineMetrics() - vars(curTable)[metric] = metricObj - dummy = sstruct.unpack2( - sbitLineMetricsFormat, data[i : i + 12], metricObj - ) - i += 12 - dummy = sstruct.unpack( - bitmapSizeTableFormatPart2, data[i : i + 8], curTable - ) - i += 8 - - for curStrike in self.strikes: - curTable = curStrike.bitmapSizeTable - for subtableIndex in range(curTable.numberOfIndexSubTables): - i = ( - curTable.indexSubTableArrayOffset - + subtableIndex * indexSubTableArraySize - ) - - tup = struct.unpack( - indexSubTableArrayFormat, data[i : i + indexSubTableArraySize] - ) - (firstGlyphIndex, lastGlyphIndex, additionalOffsetToIndexSubtable) = tup - i = curTable.indexSubTableArrayOffset + additionalOffsetToIndexSubtable - - tup = struct.unpack( - indexSubHeaderFormat, data[i : i + indexSubHeaderSize] - ) - (indexFormat, imageFormat, imageDataOffset) = tup - - indexFormatClass = self.getIndexFormatClass(indexFormat) - indexSubTable = indexFormatClass(data[i + indexSubHeaderSize :], ttFont) - indexSubTable.firstGlyphIndex = firstGlyphIndex - indexSubTable.lastGlyphIndex = lastGlyphIndex - indexSubTable.additionalOffsetToIndexSubtable = ( - additionalOffsetToIndexSubtable - ) - indexSubTable.indexFormat = indexFormat - indexSubTable.imageFormat = imageFormat - indexSubTable.imageDataOffset = imageDataOffset - indexSubTable.decompile() # https://github.com/fonttools/fonttools/issues/317 - curStrike.indexSubTables.append(indexSubTable) - - def compile(self, ttFont): - - dataList = [] - self.numSizes = len(self.strikes) - dataList.append(sstruct.pack(eblcHeaderFormat, self)) - - # Data size of the header + bitmapSizeTable needs to be calculated - # in order to form offsets. This value will hold the size of the data - # in dataList after all the data is consolidated in dataList. - dataSize = len(dataList[0]) - - # The table will be structured in the following order: - # (0) header - # (1) Each bitmapSizeTable [1 ... self.numSizes] - # (2) Alternate between indexSubTableArray and indexSubTable - # for each bitmapSizeTable present. - # - # The issue is maintaining the proper offsets when table information - # gets moved around. All offsets and size information must be recalculated - # when building the table to allow editing within ttLib and also allow easy - # import/export to and from XML. All of this offset information is lost - # when exporting to XML so everything must be calculated fresh so importing - # from XML will work cleanly. Only byte offset and size information is - # calculated fresh. Count information like numberOfIndexSubTables is - # checked through assertions. If the information in this table was not - # touched or was changed properly then these types of values should match. - # - # The table will be rebuilt the following way: - # (0) Precompute the size of all the bitmapSizeTables. This is needed to - # compute the offsets properly. - # (1) For each bitmapSizeTable compute the indexSubTable and - # indexSubTableArray pair. The indexSubTable must be computed first - # so that the offset information in indexSubTableArray can be - # calculated. Update the data size after each pairing. 
- # (2) Build each bitmapSizeTable. - # (3) Consolidate all the data into the main dataList in the correct order. - - for _ in self.strikes: - dataSize += sstruct.calcsize(bitmapSizeTableFormatPart1) - dataSize += len(("hori", "vert")) * sstruct.calcsize(sbitLineMetricsFormat) - dataSize += sstruct.calcsize(bitmapSizeTableFormatPart2) - - indexSubTablePairDataList = [] - for curStrike in self.strikes: - curTable = curStrike.bitmapSizeTable - curTable.numberOfIndexSubTables = len(curStrike.indexSubTables) - curTable.indexSubTableArrayOffset = dataSize - - # Precompute the size of the indexSubTableArray. This information - # is important for correctly calculating the new value for - # additionalOffsetToIndexSubtable. - sizeOfSubTableArray = ( - curTable.numberOfIndexSubTables * indexSubTableArraySize - ) - lowerBound = dataSize - dataSize += sizeOfSubTableArray - upperBound = dataSize - - indexSubTableDataList = [] - for indexSubTable in curStrike.indexSubTables: - indexSubTable.additionalOffsetToIndexSubtable = ( - dataSize - curTable.indexSubTableArrayOffset - ) - glyphIds = list(map(ttFont.getGlyphID, indexSubTable.names)) - indexSubTable.firstGlyphIndex = min(glyphIds) - indexSubTable.lastGlyphIndex = max(glyphIds) - data = indexSubTable.compile(ttFont) - indexSubTableDataList.append(data) - dataSize += len(data) - curTable.startGlyphIndex = min( - ist.firstGlyphIndex for ist in curStrike.indexSubTables - ) - curTable.endGlyphIndex = max( - ist.lastGlyphIndex for ist in curStrike.indexSubTables - ) - - for i in curStrike.indexSubTables: - data = struct.pack( - indexSubHeaderFormat, - i.firstGlyphIndex, - i.lastGlyphIndex, - i.additionalOffsetToIndexSubtable, - ) - indexSubTablePairDataList.append(data) - indexSubTablePairDataList.extend(indexSubTableDataList) - curTable.indexTablesSize = dataSize - curTable.indexSubTableArrayOffset - - for curStrike in self.strikes: - curTable = curStrike.bitmapSizeTable - data = sstruct.pack(bitmapSizeTableFormatPart1, curTable) - dataList.append(data) - for metric in ("hori", "vert"): - metricObj = vars(curTable)[metric] - data = sstruct.pack(sbitLineMetricsFormat, metricObj) - dataList.append(data) - data = sstruct.pack(bitmapSizeTableFormatPart2, curTable) - dataList.append(data) - dataList.extend(indexSubTablePairDataList) - - return bytesjoin(dataList) - - def toXML(self, writer, ttFont): - writer.simpletag("header", [("version", self.version)]) - writer.newline() - for curIndex, curStrike in enumerate(self.strikes): - curStrike.toXML(curIndex, writer, ttFont) - - def fromXML(self, name, attrs, content, ttFont): - if name == "header": - self.version = safeEval(attrs["version"]) - elif name == "strike": - if not hasattr(self, "strikes"): - self.strikes = [] - strikeIndex = safeEval(attrs["index"]) - curStrike = Strike() - curStrike.fromXML(name, attrs, content, ttFont, self) - - # Grow the strike array to the appropriate size. The XML format - # allows for the strike index value to be out of order. - if strikeIndex >= len(self.strikes): - self.strikes += [None] * (strikeIndex + 1 - len(self.strikes)) - assert self.strikes[strikeIndex] is None, "Duplicate strike EBLC indices." 
- self.strikes[strikeIndex] = curStrike - - -class Strike(object): - def __init__(self): - self.bitmapSizeTable = BitmapSizeTable() - self.indexSubTables = [] - - def toXML(self, strikeIndex, writer, ttFont): - writer.begintag("strike", [("index", strikeIndex)]) - writer.newline() - self.bitmapSizeTable.toXML(writer, ttFont) - writer.comment( - "GlyphIds are written but not read. The firstGlyphIndex and\nlastGlyphIndex values will be recalculated by the compiler." - ) - writer.newline() - for indexSubTable in self.indexSubTables: - indexSubTable.toXML(writer, ttFont) - writer.endtag("strike") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont, locator): - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "bitmapSizeTable": - self.bitmapSizeTable.fromXML(name, attrs, content, ttFont) - elif name.startswith(_indexSubTableSubclassPrefix): - indexFormat = safeEval(name[len(_indexSubTableSubclassPrefix) :]) - indexFormatClass = locator.getIndexFormatClass(indexFormat) - indexSubTable = indexFormatClass(None, None) - indexSubTable.indexFormat = indexFormat - indexSubTable.fromXML(name, attrs, content, ttFont) - self.indexSubTables.append(indexSubTable) - - -class BitmapSizeTable(object): - - # Returns all the simple metric names that bitmap size table - # cares about in terms of XML creation. - def _getXMLMetricNames(self): - dataNames = sstruct.getformat(bitmapSizeTableFormatPart1)[1] - dataNames = dataNames + sstruct.getformat(bitmapSizeTableFormatPart2)[1] - # Skip the first 3 data names because they are byte offsets and counts. - return dataNames[3:] - - def toXML(self, writer, ttFont): - writer.begintag("bitmapSizeTable") - writer.newline() - for metric in ("hori", "vert"): - getattr(self, metric).toXML(metric, writer, ttFont) - for metricName in self._getXMLMetricNames(): - writer.simpletag(metricName, value=getattr(self, metricName)) - writer.newline() - writer.endtag("bitmapSizeTable") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - # Create a lookup for all the simple names that make sense to - # bitmap size table. Only read the information from these names. - dataNames = set(self._getXMLMetricNames()) - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "sbitLineMetrics": - direction = attrs["direction"] - assert direction in ( - "hori", - "vert", - ), "SbitLineMetrics direction specified invalid." - metricObj = SbitLineMetrics() - metricObj.fromXML(name, attrs, content, ttFont) - vars(self)[direction] = metricObj - elif name in dataNames: - vars(self)[name] = safeEval(attrs["value"]) - else: - log.warning("unknown name '%s' being ignored in BitmapSizeTable.", name) - - -class SbitLineMetrics(object): - def toXML(self, name, writer, ttFont): - writer.begintag("sbitLineMetrics", [("direction", name)]) - writer.newline() - for metricName in sstruct.getformat(sbitLineMetricsFormat)[1]: - writer.simpletag(metricName, value=getattr(self, metricName)) - writer.newline() - writer.endtag("sbitLineMetrics") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - metricNames = set(sstruct.getformat(sbitLineMetricsFormat)[1]) - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name in metricNames: - vars(self)[name] = safeEval(attrs["value"]) - - -# Important information about the naming scheme. Used for identifying subtables. 
-_indexSubTableSubclassPrefix = "eblc_index_sub_table_" - - -class EblcIndexSubTable(object): - def __init__(self, data, ttFont): - self.data = data - self.ttFont = ttFont - # TODO Currently non-lazy decompiling doesn't work for this class... - # if not ttFont.lazy: - # self.decompile() - # del self.data, self.ttFont - - def __getattr__(self, attr): - # Allow lazy decompile. - if attr[:2] == "__": - raise AttributeError(attr) - if attr == "data": - raise AttributeError(attr) - self.decompile() - return getattr(self, attr) - - def ensureDecompiled(self, recurse=False): - if hasattr(self, "data"): - self.decompile() - - # This method just takes care of the indexSubHeader. Implementing subclasses - # should call it to compile the indexSubHeader and then continue compiling - # the remainder of their unique format. - def compile(self, ttFont): - return struct.pack( - indexSubHeaderFormat, - self.indexFormat, - self.imageFormat, - self.imageDataOffset, - ) - - # Creates the XML for bitmap glyphs. Each index sub table basically makes - # the same XML except for specific metric information that is written - # out via a method call that a subclass implements optionally. - def toXML(self, writer, ttFont): - writer.begintag( - self.__class__.__name__, - [ - ("imageFormat", self.imageFormat), - ("firstGlyphIndex", self.firstGlyphIndex), - ("lastGlyphIndex", self.lastGlyphIndex), - ], - ) - writer.newline() - self.writeMetrics(writer, ttFont) - # Write out the names as thats all thats needed to rebuild etc. - # For font debugging of consecutive formats the ids are also written. - # The ids are not read when moving from the XML format. - glyphIds = map(ttFont.getGlyphID, self.names) - for glyphName, glyphId in zip(self.names, glyphIds): - writer.simpletag("glyphLoc", name=glyphName, id=glyphId) - writer.newline() - writer.endtag(self.__class__.__name__) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - # Read all the attributes. Even though the glyph indices are - # recalculated, they are still read in case there needs to - # be an immediate export of the data. - self.imageFormat = safeEval(attrs["imageFormat"]) - self.firstGlyphIndex = safeEval(attrs["firstGlyphIndex"]) - self.lastGlyphIndex = safeEval(attrs["lastGlyphIndex"]) - - self.readMetrics(name, attrs, content, ttFont) - - self.names = [] - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "glyphLoc": - self.names.append(attrs["name"]) - - # A helper method that writes the metrics for the index sub table. It also - # is responsible for writing the image size for fixed size data since fixed - # size is not recalculated on compile. Default behavior is to do nothing. - def writeMetrics(self, writer, ttFont): - pass - - # A helper method that is the inverse of writeMetrics. - def readMetrics(self, name, attrs, content, ttFont): - pass - - # This method is for fixed glyph data sizes. There are formats where - # the glyph data is fixed but are actually composite glyphs. To handle - # this the font spec in indexSubTable makes the data the size of the - # fixed size by padding the component arrays. This function abstracts - # out this padding process. Input is data unpadded. Output is data - # padded only in fixed formats. Default behavior is to return the data. - def padBitmapData(self, data): - return data - - # Remove any of the glyph locations and names that are flagged as skipped. - # This only occurs in formats {1,3}. 
- def removeSkipGlyphs(self): - # Determines if a name, location pair is a valid data location. - # Skip glyphs are marked when the size is equal to zero. - def isValidLocation(args): - (name, (startByte, endByte)) = args - return startByte < endByte - - # Remove all skip glyphs. - dataPairs = list(filter(isValidLocation, zip(self.names, self.locations))) - self.names, self.locations = list(map(list, zip(*dataPairs))) - - -# A closure for creating a custom mixin. This is done because formats 1 and 3 -# are very similar. The only difference between them is the size per offset -# value. Code put in here should handle both cases generally. -def _createOffsetArrayIndexSubTableMixin(formatStringForDataType): - - # Prep the data size for the offset array data format. - dataFormat = ">" + formatStringForDataType - offsetDataSize = struct.calcsize(dataFormat) - - class OffsetArrayIndexSubTableMixin(object): - def decompile(self): - - numGlyphs = self.lastGlyphIndex - self.firstGlyphIndex + 1 - indexingOffsets = [ - glyphIndex * offsetDataSize for glyphIndex in range(numGlyphs + 2) - ] - indexingLocations = zip(indexingOffsets, indexingOffsets[1:]) - offsetArray = [ - struct.unpack(dataFormat, self.data[slice(*loc)])[0] - for loc in indexingLocations - ] - - glyphIds = list(range(self.firstGlyphIndex, self.lastGlyphIndex + 1)) - modifiedOffsets = [offset + self.imageDataOffset for offset in offsetArray] - self.locations = list(zip(modifiedOffsets, modifiedOffsets[1:])) - - self.names = list(map(self.ttFont.getGlyphName, glyphIds)) - self.removeSkipGlyphs() - del self.data, self.ttFont - - def compile(self, ttFont): - # First make sure that all the data lines up properly. Formats 1 and 3 - # must have all its data lined up consecutively. If not this will fail. - for curLoc, nxtLoc in zip(self.locations, self.locations[1:]): - assert ( - curLoc[1] == nxtLoc[0] - ), "Data must be consecutive in indexSubTable offset formats" - - glyphIds = list(map(ttFont.getGlyphID, self.names)) - # Make sure that all ids are sorted strictly increasing. - assert all(glyphIds[i] < glyphIds[i + 1] for i in range(len(glyphIds) - 1)) - - # Run a simple algorithm to add skip glyphs to the data locations at - # the places where an id is not present. - idQueue = deque(glyphIds) - locQueue = deque(self.locations) - allGlyphIds = list(range(self.firstGlyphIndex, self.lastGlyphIndex + 1)) - allLocations = [] - for curId in allGlyphIds: - if curId != idQueue[0]: - allLocations.append((locQueue[0][0], locQueue[0][0])) - else: - idQueue.popleft() - allLocations.append(locQueue.popleft()) - - # Now that all the locations are collected, pack them appropriately into - # offsets. This is the form where offset[i] is the location and - # offset[i+1]-offset[i] is the size of the data location. - offsets = list(allLocations[0]) + [loc[1] for loc in allLocations[1:]] - # Image data offset must be less than or equal to the minimum of locations. - # This offset may change the value for round tripping but is safer and - # allows imageDataOffset to not be required to be in the XML version. - self.imageDataOffset = min(offsets) - offsetArray = [offset - self.imageDataOffset for offset in offsets] - - dataList = [EblcIndexSubTable.compile(self, ttFont)] - dataList += [ - struct.pack(dataFormat, offsetValue) for offsetValue in offsetArray - ] - # Take care of any padding issues. Only occurs in format 3. 
- if offsetDataSize * len(offsetArray) % 4 != 0: - dataList.append(struct.pack(dataFormat, 0)) - return bytesjoin(dataList) - - return OffsetArrayIndexSubTableMixin - - -# A Mixin for functionality shared between the different kinds -# of fixed sized data handling. Both kinds have big metrics so -# that kind of special processing is also handled in this mixin. -class FixedSizeIndexSubTableMixin(object): - def writeMetrics(self, writer, ttFont): - writer.simpletag("imageSize", value=self.imageSize) - writer.newline() - self.metrics.toXML(writer, ttFont) - - def readMetrics(self, name, attrs, content, ttFont): - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == "imageSize": - self.imageSize = safeEval(attrs["value"]) - elif name == BigGlyphMetrics.__name__: - self.metrics = BigGlyphMetrics() - self.metrics.fromXML(name, attrs, content, ttFont) - elif name == SmallGlyphMetrics.__name__: - log.warning( - "SmallGlyphMetrics being ignored in format %d.", self.indexFormat - ) - - def padBitmapData(self, data): - # Make sure that the data isn't bigger than the fixed size. - assert len(data) <= self.imageSize, ( - "Data in indexSubTable format %d must be less than the fixed size." - % self.indexFormat - ) - # Pad the data so that it matches the fixed size. - pad = (self.imageSize - len(data)) * b"\0" - return data + pad - - -class eblc_index_sub_table_1( - _createOffsetArrayIndexSubTableMixin("L"), EblcIndexSubTable -): - pass - - -class eblc_index_sub_table_2(FixedSizeIndexSubTableMixin, EblcIndexSubTable): - def decompile(self): - (self.imageSize,) = struct.unpack(">L", self.data[:4]) - self.metrics = BigGlyphMetrics() - sstruct.unpack2(bigGlyphMetricsFormat, self.data[4:], self.metrics) - glyphIds = list(range(self.firstGlyphIndex, self.lastGlyphIndex + 1)) - offsets = [ - self.imageSize * i + self.imageDataOffset for i in range(len(glyphIds) + 1) - ] - self.locations = list(zip(offsets, offsets[1:])) - self.names = list(map(self.ttFont.getGlyphName, glyphIds)) - del self.data, self.ttFont - - def compile(self, ttFont): - glyphIds = list(map(ttFont.getGlyphID, self.names)) - # Make sure all the ids are consecutive. This is required by Format 2. - assert glyphIds == list( - range(self.firstGlyphIndex, self.lastGlyphIndex + 1) - ), "Format 2 ids must be consecutive." - self.imageDataOffset = min(next(iter(zip(*self.locations)))) - - dataList = [EblcIndexSubTable.compile(self, ttFont)] - dataList.append(struct.pack(">L", self.imageSize)) - dataList.append(sstruct.pack(bigGlyphMetricsFormat, self.metrics)) - return bytesjoin(dataList) - - -class eblc_index_sub_table_3( - _createOffsetArrayIndexSubTableMixin("H"), EblcIndexSubTable -): - pass - - -class eblc_index_sub_table_4(EblcIndexSubTable): - def decompile(self): - - (numGlyphs,) = struct.unpack(">L", self.data[:4]) - data = self.data[4:] - indexingOffsets = [ - glyphIndex * codeOffsetPairSize for glyphIndex in range(numGlyphs + 2) - ] - indexingLocations = zip(indexingOffsets, indexingOffsets[1:]) - glyphArray = [ - struct.unpack(codeOffsetPairFormat, data[slice(*loc)]) - for loc in indexingLocations - ] - glyphIds, offsets = list(map(list, zip(*glyphArray))) - # There are one too many glyph ids. Get rid of the last one. 
- glyphIds.pop() - - offsets = [offset + self.imageDataOffset for offset in offsets] - self.locations = list(zip(offsets, offsets[1:])) - self.names = list(map(self.ttFont.getGlyphName, glyphIds)) - del self.data, self.ttFont - - def compile(self, ttFont): - # First make sure that all the data lines up properly. Format 4 - # must have all its data lined up consecutively. If not this will fail. - for curLoc, nxtLoc in zip(self.locations, self.locations[1:]): - assert ( - curLoc[1] == nxtLoc[0] - ), "Data must be consecutive in indexSubTable format 4" - - offsets = list(self.locations[0]) + [loc[1] for loc in self.locations[1:]] - # Image data offset must be less than or equal to the minimum of locations. - # Resetting this offset may change the value for round tripping but is safer - # and allows imageDataOffset to not be required to be in the XML version. - self.imageDataOffset = min(offsets) - offsets = [offset - self.imageDataOffset for offset in offsets] - glyphIds = list(map(ttFont.getGlyphID, self.names)) - # Create an iterator over the ids plus a padding value. - idsPlusPad = list(itertools.chain(glyphIds, [0])) - - dataList = [EblcIndexSubTable.compile(self, ttFont)] - dataList.append(struct.pack(">L", len(glyphIds))) - tmp = [ - struct.pack(codeOffsetPairFormat, *cop) for cop in zip(idsPlusPad, offsets) - ] - dataList += tmp - data = bytesjoin(dataList) - return data - - -class eblc_index_sub_table_5(FixedSizeIndexSubTableMixin, EblcIndexSubTable): - def decompile(self): - self.origDataLen = 0 - (self.imageSize,) = struct.unpack(">L", self.data[:4]) - data = self.data[4:] - self.metrics, data = sstruct.unpack2( - bigGlyphMetricsFormat, data, BigGlyphMetrics() - ) - (numGlyphs,) = struct.unpack(">L", data[:4]) - data = data[4:] - glyphIds = [ - struct.unpack(">H", data[2 * i : 2 * (i + 1)])[0] for i in range(numGlyphs) - ] - - offsets = [ - self.imageSize * i + self.imageDataOffset for i in range(len(glyphIds) + 1) - ] - self.locations = list(zip(offsets, offsets[1:])) - self.names = list(map(self.ttFont.getGlyphName, glyphIds)) - del self.data, self.ttFont - - def compile(self, ttFont): - self.imageDataOffset = min(next(iter(zip(*self.locations)))) - dataList = [EblcIndexSubTable.compile(self, ttFont)] - dataList.append(struct.pack(">L", self.imageSize)) - dataList.append(sstruct.pack(bigGlyphMetricsFormat, self.metrics)) - glyphIds = list(map(ttFont.getGlyphID, self.names)) - dataList.append(struct.pack(">L", len(glyphIds))) - dataList += [struct.pack(">H", curId) for curId in glyphIds] - if len(glyphIds) % 2 == 1: - dataList.append(struct.pack(">H", 0)) - return bytesjoin(dataList) - - -# Dictionary of indexFormat to the class representing that format. 
-eblc_sub_table_classes = { - 1: eblc_index_sub_table_1, - 2: eblc_index_sub_table_2, - 3: eblc_index_sub_table_3, - 4: eblc_index_sub_table_4, - 5: eblc_index_sub_table_5, -} diff --git a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/archs/arch_util.py b/spaces/leafShen/CodeFormer/CodeFormer/basicsr/archs/arch_util.py deleted file mode 100644 index bad45ab34e901c47fb539152fca714a3795b0de2..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/archs/arch_util.py +++ /dev/null @@ -1,318 +0,0 @@ -import collections.abc -import math -import torch -import torchvision -import warnings -from distutils.version import LooseVersion -from itertools import repeat -from torch import nn as nn -from torch.nn import functional as F -from torch.nn import init as init -from torch.nn.modules.batchnorm import _BatchNorm - -from basicsr.ops.dcn import ModulatedDeformConvPack, modulated_deform_conv -from basicsr.utils import get_root_logger - - -@torch.no_grad() -def default_init_weights(module_list, scale=1, bias_fill=0, **kwargs): - """Initialize network weights. - - Args: - module_list (list[nn.Module] | nn.Module): Modules to be initialized. - scale (float): Scale initialized weights, especially for residual - blocks. Default: 1. - bias_fill (float): The value to fill bias. Default: 0 - kwargs (dict): Other arguments for initialization function. - """ - if not isinstance(module_list, list): - module_list = [module_list] - for module in module_list: - for m in module.modules(): - if isinstance(m, nn.Conv2d): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, nn.Linear): - init.kaiming_normal_(m.weight, **kwargs) - m.weight.data *= scale - if m.bias is not None: - m.bias.data.fill_(bias_fill) - elif isinstance(m, _BatchNorm): - init.constant_(m.weight, 1) - if m.bias is not None: - m.bias.data.fill_(bias_fill) - - -def make_layer(basic_block, num_basic_block, **kwarg): - """Make layers by stacking the same blocks. - - Args: - basic_block (nn.module): nn.module class for basic block. - num_basic_block (int): number of blocks. - - Returns: - nn.Sequential: Stacked blocks in nn.Sequential. - """ - layers = [] - for _ in range(num_basic_block): - layers.append(basic_block(**kwarg)) - return nn.Sequential(*layers) - - -class ResidualBlockNoBN(nn.Module): - """Residual block without BN. - - It has a style of: - ---Conv-ReLU-Conv-+- - |________________| - - Args: - num_feat (int): Channel number of intermediate features. - Default: 64. - res_scale (float): Residual scale. Default: 1. - pytorch_init (bool): If set to True, use pytorch default init, - otherwise, use default_init_weights. Default: False. - """ - - def __init__(self, num_feat=64, res_scale=1, pytorch_init=False): - super(ResidualBlockNoBN, self).__init__() - self.res_scale = res_scale - self.conv1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.conv2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=True) - self.relu = nn.ReLU(inplace=True) - - if not pytorch_init: - default_init_weights([self.conv1, self.conv2], 0.1) - - def forward(self, x): - identity = x - out = self.conv2(self.relu(self.conv1(x))) - return identity + out * self.res_scale - - -class Upsample(nn.Sequential): - """Upsample module. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. 
- """ - - def __init__(self, scale, num_feat): - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(2)) - elif scale == 3: - m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1)) - m.append(nn.PixelShuffle(3)) - else: - raise ValueError(f'scale {scale} is not supported. Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - - -def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros', align_corners=True): - """Warp an image or feature map with optical flow. - - Args: - x (Tensor): Tensor with size (n, c, h, w). - flow (Tensor): Tensor with size (n, h, w, 2), normal value. - interp_mode (str): 'nearest' or 'bilinear'. Default: 'bilinear'. - padding_mode (str): 'zeros' or 'border' or 'reflection'. - Default: 'zeros'. - align_corners (bool): Before pytorch 1.3, the default value is - align_corners=True. After pytorch 1.3, the default value is - align_corners=False. Here, we use the True as default. - - Returns: - Tensor: Warped image or feature map. - """ - assert x.size()[-2:] == flow.size()[1:3] - _, _, h, w = x.size() - # create mesh grid - grid_y, grid_x = torch.meshgrid(torch.arange(0, h).type_as(x), torch.arange(0, w).type_as(x)) - grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2 - grid.requires_grad = False - - vgrid = grid + flow - # scale grid to [-1,1] - vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(w - 1, 1) - 1.0 - vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(h - 1, 1) - 1.0 - vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3) - output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode, align_corners=align_corners) - - # TODO, what if align_corners=False - return output - - -def resize_flow(flow, size_type, sizes, interp_mode='bilinear', align_corners=False): - """Resize a flow according to ratio or shape. - - Args: - flow (Tensor): Precomputed flow. shape [N, 2, H, W]. - size_type (str): 'ratio' or 'shape'. - sizes (list[int | float]): the ratio for resizing or the final output - shape. - 1) The order of ratio should be [ratio_h, ratio_w]. For - downsampling, the ratio should be smaller than 1.0 (i.e., ratio - < 1.0). For upsampling, the ratio should be larger than 1.0 (i.e., - ratio > 1.0). - 2) The order of output_size should be [out_h, out_w]. - interp_mode (str): The mode of interpolation for resizing. - Default: 'bilinear'. - align_corners (bool): Whether align corners. Default: False. - - Returns: - Tensor: Resized flow. - """ - _, _, flow_h, flow_w = flow.size() - if size_type == 'ratio': - output_h, output_w = int(flow_h * sizes[0]), int(flow_w * sizes[1]) - elif size_type == 'shape': - output_h, output_w = sizes[0], sizes[1] - else: - raise ValueError(f'Size type should be ratio or shape, but got type {size_type}.') - - input_flow = flow.clone() - ratio_h = output_h / flow_h - ratio_w = output_w / flow_w - input_flow[:, 0, :, :] *= ratio_w - input_flow[:, 1, :, :] *= ratio_h - resized_flow = F.interpolate( - input=input_flow, size=(output_h, output_w), mode=interp_mode, align_corners=align_corners) - return resized_flow - - -# TODO: may write a cpp file -def pixel_unshuffle(x, scale): - """ Pixel unshuffle. - - Args: - x (Tensor): Input feature with shape (b, c, hh, hw). - scale (int): Downsample ratio. - - Returns: - Tensor: the pixel unshuffled feature. 
- """ - b, c, hh, hw = x.size() - out_channel = c * (scale**2) - assert hh % scale == 0 and hw % scale == 0 - h = hh // scale - w = hw // scale - x_view = x.view(b, c, h, scale, w, scale) - return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w) - - -class DCNv2Pack(ModulatedDeformConvPack): - """Modulated deformable conv for deformable alignment. - - Different from the official DCNv2Pack, which generates offsets and masks - from the preceding features, this DCNv2Pack takes another different - features to generate offsets and masks. - - Ref: - Delving Deep into Deformable Alignment in Video Super-Resolution. - """ - - def forward(self, x, feat): - out = self.conv_offset(feat) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - - offset_absmean = torch.mean(torch.abs(offset)) - if offset_absmean > 50: - logger = get_root_logger() - logger.warning(f'Offset abs mean is {offset_absmean}, larger than 50.') - - if LooseVersion(torchvision.__version__) >= LooseVersion('0.9.0'): - return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias, self.stride, self.padding, - self.dilation, mask) - else: - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups, self.deformable_groups) - - -def _no_grad_trunc_normal_(tensor, mean, std, a, b): - # From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py - # Cut & paste from PyTorch official master until it's in a few official releases - RW - # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - low = norm_cdf((a - mean) / std) - up = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [low, up], then translate to - # [2l-1, 2u-1]. - tensor.uniform_(2 * low - 1, 2 * up - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. - - From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py - - The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. 
- - Args: - tensor: an n-dimensional `torch.Tensor` - mean: the mean of the normal distribution - std: the standard deviation of the normal distribution - a: the minimum cutoff value - b: the maximum cutoff value - - Examples: - >>> w = torch.empty(3, 5) - >>> nn.init.trunc_normal_(w) - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) - - -# From PyTorch -def _ntuple(n): - - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = _ntuple \ No newline at end of file diff --git a/spaces/leilevy/bingo/src/lib/bots/bing/types.ts b/spaces/leilevy/bingo/src/lib/bots/bing/types.ts deleted file mode 100644 index 0c8190b40841a04e1ead25f5cfee4dda7acd3361..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,260 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_IP_FORBIDDEN = 'BING_IP_FORBIDDEN', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - 
CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/gallery/script.py b/spaces/leogabraneth/text-generation-webui-main/extensions/gallery/script.py deleted file mode 100644 index 611a11f4a89d048ee9d0095f315391f53676f413..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/gallery/script.py +++ /dev/null @@ -1,101 +0,0 @@ -from pathlib import Path - -import gradio as gr - -from modules.html_generator import get_image_cache -from modules.shared import gradio - - -def generate_css(): - css = """ - .character-gallery > .gallery { - margin: 1rem 0; - display: grid !important; - grid-template-columns: repeat(auto-fit, minmax(150px, 1fr)); - grid-column-gap: 0.4rem; - grid-row-gap: 1.2rem; - } - - .character-gallery > .label { - display: none !important; - } - - .character-gallery button.gallery-item { - display: contents; - } - - .character-container { - cursor: pointer; - 
text-align: center; - position: relative; - opacity: 0.85; - } - - .character-container:hover { - opacity: 1; - } - - .character-container .placeholder, .character-container img { - width: 150px; - height: 200px; - background-color: gray; - object-fit: cover; - margin: 0 auto; - border-radius: 1rem; - border: 3px solid white; - box-shadow: 3px 3px 6px 0px rgb(0 0 0 / 50%); - } - - .character-name { - margin-top: 0.3rem; - display: block; - font-size: 1.2rem; - font-weight: 600; - overflow-wrap: anywhere; - } - """ - return css - - -def generate_html(): - cards = [] - # Iterate through files in image folder - for file in sorted(Path("characters").glob("*")): - if file.suffix in [".json", ".yml", ".yaml"]: - character = file.stem - container_html = '
<div class="character-container">' -            image_html = "<div class='placeholder'></div>
" - -            for path in [Path(f"characters/{character}.{extension}") for extension in ['png', 'jpg', 'jpeg']]: -                if path.exists(): -                    image_html = f'<img src="file/{get_image_cache(path)}">' -                    break - -            container_html += f'{image_html} <span class="character-name">{character}</span>' -            container_html += "</div>
            " - cards.append([container_html, character]) - - return cards - - -def select_character(evt: gr.SelectData): - return (evt.value[1]) - - -def custom_js(): - path_to_js = Path(__file__).parent.resolve() / 'script.js' - return open(path_to_js, 'r').read() - - -def ui(): - with gr.Accordion("Character gallery", open=False, elem_id='gallery-extension'): - update = gr.Button("Refresh") - gr.HTML(value="") - gallery = gr.Dataset(components=[gr.HTML(visible=False)], - label="", - samples=generate_html(), - elem_classes=["character-gallery"], - samples_per_page=50 - ) - update.click(generate_html, [], gallery) - gallery.select(select_character, None, gradio['character_menu']) diff --git a/spaces/lgaleana/toolkit/ai/image.py b/spaces/lgaleana/toolkit/ai/image.py deleted file mode 100644 index 640f9ac739fd8ef3579922e26b83ed9d21e5a78c..0000000000000000000000000000000000000000 --- a/spaces/lgaleana/toolkit/ai/image.py +++ /dev/null @@ -1,19 +0,0 @@ -import os -from typing import Any, Dict, List - -import openai -from dotenv import load_dotenv - -load_dotenv() - - -openai.api_key = os.environ["OPENAI_KEY_PERSONAL"] - - -def gen(prompt: str, n: int, size: str) -> Dict[str, Any]: - return openai.Image.create(prompt=prompt, n=n, size=size) # type: ignore - - -def urls(prompt: str, n: int = 1, size: str = "512x512") -> List[str]: - images = gen(prompt, n, size) - return [i["url"] for i in images["data"]] # type: ignore diff --git a/spaces/liliyRehtina/color/utils/__init__.py b/spaces/liliyRehtina/color/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/limcheekin/deepseek-coder-6.7B-instruct-GGUF/Dockerfile b/spaces/limcheekin/deepseek-coder-6.7B-instruct-GGUF/Dockerfile deleted file mode 100644 index e46d91ecd52accfdcd90e3e4460edb96456d3a8f..0000000000000000000000000000000000000000 --- a/spaces/limcheekin/deepseek-coder-6.7B-instruct-GGUF/Dockerfile +++ /dev/null @@ -1,35 +0,0 @@ -# Grab a fresh copy of the Python image -FROM python:3.11-slim - -# Install build and runtime dependencies -RUN apt-get update && \ - apt-get install -y \ - libopenblas-dev \ - ninja-build \ - build-essential \ - pkg-config \ - curl - -RUN pip install -U pip setuptools wheel && \ - CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install --verbose llama-cpp-python[server] - -# Download model -RUN mkdir model && \ - curl -L https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF/resolve/main/deepseek-coder-6.7b-instruct.Q4_K_M.gguf -o model/gguf-model.bin - -COPY ./start_server.sh ./ -COPY ./main.py ./ -COPY ./index.html ./ - -# Make the server start script executable -RUN chmod +x ./start_server.sh - -# Set environment variable for the host -ENV HOST=0.0.0.0 -ENV PORT=7860 - -# Expose a port for the server -EXPOSE ${PORT} - -# Run the server start script -CMD ["/bin/sh", "./start_server.sh"] \ No newline at end of file diff --git a/spaces/limingcv/AlignDet/finetune/finetune_mask-rcnn_swin-s_1x_coco/mask_rcnn_swin-s-p4-w7_fpn_ms-crop_1x_coco.py b/spaces/limingcv/AlignDet/finetune/finetune_mask-rcnn_swin-s_1x_coco/mask_rcnn_swin-s-p4-w7_fpn_ms-crop_1x_coco.py deleted file mode 100644 index 1fa4a39c49101afcbf61d3923531a75d7311ad05..0000000000000000000000000000000000000000 --- a/spaces/limingcv/AlignDet/finetune/finetune_mask-rcnn_swin-s_1x_coco/mask_rcnn_swin-s-p4-w7_fpn_ms-crop_1x_coco.py +++ /dev/null @@ -1,356 +0,0 @@ -model = dict( - type='MaskRCNN', - backbone=dict( - 
type='SwinTransformer', - embed_dims=96, - depths=[2, 2, 18, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4, - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.2, - patch_norm=True, - out_indices=(0, 1, 2, 3), - with_cp=False, - convert_weights=True, - init_cfg=dict( - type='Pretrained', - checkpoint= - 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth' - )), - neck=dict( - type='FPN', - in_channels=[96, 192, 384, 768], - out_channels=256, - num_outs=5, - norm_cfg=dict(type='SyncBN', requires_grad=True)), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0.0, 0.0, 0.0, 0.0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='AutoAugment', - policies=[[{ - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), 
(544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'keep_ratio': - True - }], - [{ - 'type': 'Resize', - 'img_scale': [(400, 1333), (500, 1333), (600, 1333)], - 'multiscale_mode': 'value', - 'keep_ratio': True - }, { - 'type': 'RandomCrop', - 'crop_type': 'absolute_range', - 'crop_size': (384, 600), - 'allow_negative_crop': True - }, { - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'override': - True, - 'keep_ratio': - True - }]]), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_train2017.json', - img_prefix='data/coco/train2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict( - type='AutoAugment', - policies=[[{ - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - 'multiscale_mode': - 'value', - 'keep_ratio': - True - }], - [{ - 'type': 'Resize', - 'img_scale': [(400, 1333), (500, 1333), - (600, 1333)], - 'multiscale_mode': 'value', - 'keep_ratio': True - }, { - 'type': 'RandomCrop', - 'crop_type': 'absolute_range', - 'crop_size': (384, 600), - 'allow_negative_crop': True - }, { - 'type': - 'Resize', - 'img_scale': [(480, 1333), (512, 1333), - (544, 1333), (576, 1333), - (608, 1333), (640, 1333), - (672, 1333), (704, 1333), - (736, 1333), (768, 1333), - (800, 1333)], - 'multiscale_mode': - 'value', - 'override': - True, - 'keep_ratio': - True - }]]), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']) - ]), - val=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ]), - test=dict( - type='CocoDataset', - ann_file='data/coco/annotations/instances_val2017.json', - 
img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ])) -evaluation = dict(metric=['bbox', 'segm'], save_best='auto') -optimizer = dict( - type='AdamW', - lr=0.0001, - betas=(0.9, 0.999), - weight_decay=0.01, - paramwise_cfg=dict( - custom_keys=dict( - absolute_pos_embed=dict(decay_mult=0.0), - relative_position_bias_table=dict(decay_mult=0.0), - norm=dict(decay_mult=0.0)))) -optimizer_config = dict(grad_clip=None) -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=1000, - warmup_ratio=0.001, - step=[8, 11]) -runner = dict(type='EpochBasedRunner', max_epochs=12) -checkpoint_config = dict(interval=1) -log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) -custom_hooks = [ - dict(type='NumClassCheckHook'), - dict( - type='MMDetWandbHook', - init_kwargs=dict(project='I2B', group='finetune'), - interval=50, - num_eval_images=0, - log_checkpoint=False) -] -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = 'work_dirs/selfsup_mask-rcnn_swin-s_mstrain-soft-teacher_sampler-4096_temp0.5/final_model.pth' -resume_from = None -workflow = [('train', 1)] -opencv_num_threads = 0 -mp_start_method = 'fork' -auto_scale_lr = dict(enable=False, base_batch_size=16) -custom_imports = None -norm_cfg = dict(type='SyncBN', requires_grad=True) -pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth' -work_dir = 'work_dirs/finetune_mask-rcnn_swin-s_1x_coco' -auto_resume = False -gpu_ids = range(0, 8) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Formation Elephorm Photoshop Cs6 Torrent Desperados Imovie Pe.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Formation Elephorm Photoshop Cs6 Torrent Desperados Imovie Pe.md deleted file mode 100644 index 9402b15b0607eb9258076f5f4e946e766442c2a7..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Formation Elephorm Photoshop Cs6 Torrent Desperados Imovie Pe.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Formation Elephorm Photoshop Cs6 Torrent desperados imovie pe


            Download Zip ✑ ✑ ✑ https://bytlly.com/2uGwPV



            -
            -. Depending on the purpose, design and shape, A. is distinguished for manual and mechanized welding in various spatial positions, for arc and semi-arc welding in shielding gases, for submerged arc welding and in shielding gases, for consumable and non-consumable welding 8a78ff9644
            -
            -
            -

            diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Grau Gmbh Video Repair Tool Keygen 17.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Grau Gmbh Video Repair Tool Keygen 17.md deleted file mode 100644 index 8278b36d5d07b1790219988bcfc4c281c12f8916..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Grau Gmbh Video Repair Tool Keygen 17.md +++ /dev/null @@ -1,12 +0,0 @@ -

            Grau Gmbh Video Repair Tool Keygen 17


            DOWNLOAD ✯✯✯ https://bytlly.com/2uGweE



            -
            -.0.30.4727 - -Generate the required key for this system using the following steps:1. Start windows update and go to the “software” tab.2. In the list of updates, find the update which is entitled “Microsoft Security Essentials” and click on the update to start downloading it.3. After the download is completed, close the update and click on the checkmark to stop the download.4. Right click on the “Microsoft Security Essentials” update and select “Uninstall”.5. After the uninstall is completed, close the window and click on the checkmark to stop the uninstallation process.6. Right click on the “Microsoft Security Essentials” folder and select “Send To -> Desktop”.7. Paste the file on the desktop and double click on it to start the key generation process.8. After the generation process is completed, you have to close the file and run the installer.9. The installation process will start and after the installation process is completed, you have to restart your system.Note: If you want to use the Generator for more than one system, then you can create a folder for the software and save the file there. Also, the software is not meant for other operating systems. Once the software is not being used, the tool can be deleted. - -Video drivers for motherboard are an essential part of hardware operation for different applications such as playing games or running different types of applications. It is due to the performance of hardware which is different from the performance of software. You have to choose the right type of video drivers for your motherboard in order to get the best performance. Your motherboard may include one or more video card and you have to choose the right video driver for your motherboard. If you have multiple video cards, then you have to choose the correct driver for your video cards. There are different types of video drivers available for different operating systems. - -Most of the systems that are sold in the market are loaded with various types of video drivers. You may not be able to use all of them and you have to choose the right video driver for your operating system. In most of the cases, you have to run your system without using the drivers. However, it is not a good idea because you may have to uninstall the drivers in order to get access to all the functions. The best way to run your system is to use the Windows drivers but they may not be compatible with 4fefd39f24
            -
            -
            -

            diff --git a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/util/time_counter.py b/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/util/time_counter.py deleted file mode 100644 index 0aedb2e4d61bfbe7571dca9d50053f0fedaa1359..0000000000000000000000000000000000000000 --- a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/util/time_counter.py +++ /dev/null @@ -1,62 +0,0 @@ -import json -import time - - -class TimeCounter: - def __init__(self) -> None: - pass - - def clear(self): - self.timedict = {} - self.basetime = time.perf_counter() - - def timeit(self, name): - nowtime = time.perf_counter() - self.basetime - self.timedict[name] = nowtime - self.basetime = time.perf_counter() - - -class TimeHolder: - def __init__(self) -> None: - self.timedict = {} - - def update(self, _timedict: dict): - for k, v in _timedict.items(): - if k not in self.timedict: - self.timedict[k] = AverageMeter(name=k, val_only=True) - self.timedict[k].update(val=v) - - def final_res(self): - return {k: v.avg for k, v in self.timedict.items()} - - def __str__(self): - return json.dumps(self.final_res(), indent=2) - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self, name, fmt=":f", val_only=False): - self.name = name - self.fmt = fmt - self.val_only = val_only - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - if self.val_only: - fmtstr = "{name} {val" + self.fmt + "}" - else: - fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})" - return fmtstr.format(**self.__dict__) diff --git a/spaces/logasja/LowKey/align/get_nets.py b/spaces/logasja/LowKey/align/get_nets.py deleted file mode 100644 index ad1e24fe952bdf7f63348c49835500abe1983293..0000000000000000000000000000000000000000 --- a/spaces/logasja/LowKey/align/get_nets.py +++ /dev/null @@ -1,169 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from collections import OrderedDict -import numpy as np - - -class Flatten(nn.Module): - - def __init__(self): - super(Flatten, self).__init__() - - def forward(self, x): - """ - Arguments: - x: a float tensor with shape [batch_size, c, h, w]. - Returns: - a float tensor with shape [batch_size, c*h*w]. - """ - - # without this pretrained model isn't working - x = x.transpose(3, 2).contiguous() - - return x.view(x.size(0), -1) - - -class PNet(nn.Module): - - def __init__(self): - - super(PNet, self).__init__() - - # suppose we have input with size HxW, then - # after first layer: H - 2, - # after pool: ceil((H - 2)/2), - # after second conv: ceil((H - 2)/2) - 2, - # after last conv: ceil((H - 2)/2) - 4, - # and the same for W - - self.features = nn.Sequential(OrderedDict([ - ('conv1', nn.Conv2d(3, 10, 3, 1)), - ('prelu1', nn.PReLU(10)), - ('pool1', nn.MaxPool2d(2, 2, ceil_mode = True)), - - ('conv2', nn.Conv2d(10, 16, 3, 1)), - ('prelu2', nn.PReLU(16)), - - ('conv3', nn.Conv2d(16, 32, 3, 1)), - ('prelu3', nn.PReLU(32)) - ])) - - self.conv4_1 = nn.Conv2d(32, 2, 1, 1) - self.conv4_2 = nn.Conv2d(32, 4, 1, 1) - - weights = np.load("align/pnet.npy", allow_pickle=True)[()] - for n, p in self.named_parameters(): - p.data = torch.FloatTensor(weights[n]) - - def forward(self, x): - """ - Arguments: - x: a float tensor with shape [batch_size, 3, h, w]. 
- Returns: - b: a float tensor with shape [batch_size, 4, h', w']. - a: a float tensor with shape [batch_size, 2, h', w']. - """ - x = self.features(x) - a = self.conv4_1(x) - b = self.conv4_2(x) - a = F.softmax(a) - return b, a - - -class RNet(nn.Module): - - def __init__(self): - - super(RNet, self).__init__() - - self.features = nn.Sequential(OrderedDict([ - ('conv1', nn.Conv2d(3, 28, 3, 1)), - ('prelu1', nn.PReLU(28)), - ('pool1', nn.MaxPool2d(3, 2, ceil_mode = True)), - - ('conv2', nn.Conv2d(28, 48, 3, 1)), - ('prelu2', nn.PReLU(48)), - ('pool2', nn.MaxPool2d(3, 2, ceil_mode = True)), - - ('conv3', nn.Conv2d(48, 64, 2, 1)), - ('prelu3', nn.PReLU(64)), - - ('flatten', Flatten()), - ('conv4', nn.Linear(576, 128)), - ('prelu4', nn.PReLU(128)) - ])) - - self.conv5_1 = nn.Linear(128, 2) - self.conv5_2 = nn.Linear(128, 4) - - weights = np.load("align/rnet.npy", allow_pickle=True)[()] - for n, p in self.named_parameters(): - p.data = torch.FloatTensor(weights[n]) - - def forward(self, x): - """ - Arguments: - x: a float tensor with shape [batch_size, 3, h, w]. - Returns: - b: a float tensor with shape [batch_size, 4]. - a: a float tensor with shape [batch_size, 2]. - """ - x = self.features(x) - a = self.conv5_1(x) - b = self.conv5_2(x) - a = F.softmax(a) - return b, a - - -class ONet(nn.Module): - - def __init__(self): - - super(ONet, self).__init__() - - self.features = nn.Sequential(OrderedDict([ - ('conv1', nn.Conv2d(3, 32, 3, 1)), - ('prelu1', nn.PReLU(32)), - ('pool1', nn.MaxPool2d(3, 2, ceil_mode = True)), - - ('conv2', nn.Conv2d(32, 64, 3, 1)), - ('prelu2', nn.PReLU(64)), - ('pool2', nn.MaxPool2d(3, 2, ceil_mode = True)), - - ('conv3', nn.Conv2d(64, 64, 3, 1)), - ('prelu3', nn.PReLU(64)), - ('pool3', nn.MaxPool2d(2, 2, ceil_mode = True)), - - ('conv4', nn.Conv2d(64, 128, 2, 1)), - ('prelu4', nn.PReLU(128)), - - ('flatten', Flatten()), - ('conv5', nn.Linear(1152, 256)), - ('drop5', nn.Dropout(0.25)), - ('prelu5', nn.PReLU(256)), - ])) - - self.conv6_1 = nn.Linear(256, 2) - self.conv6_2 = nn.Linear(256, 4) - self.conv6_3 = nn.Linear(256, 10) - - weights = np.load("align/onet.npy", allow_pickle=True)[()] - for n, p in self.named_parameters(): - p.data = torch.FloatTensor(weights[n]) - - def forward(self, x): - """ - Arguments: - x: a float tensor with shape [batch_size, 3, h, w]. - Returns: - c: a float tensor with shape [batch_size, 10]. - b: a float tensor with shape [batch_size, 4]. - a: a float tensor with shape [batch_size, 2]. - """ - x = self.features(x) - a = self.conv6_1(x) - b = self.conv6_2(x) - c = self.conv6_3(x) - a = F.softmax(a) - return c, b, a \ No newline at end of file diff --git a/spaces/lojban/text-to-speech/vits/text/__init__.py b/spaces/lojban/text-to-speech/vits/text/__init__.py deleted file mode 100644 index e731fb9242adb30dea302e03b07eb2d0e2f7fb30..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/vits/text/__init__.py +++ /dev/null @@ -1,55 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from vits.text import cleaners -from vits.text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - - for symbol in clean_text: - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/lqinyli/ali/README.md b/spaces/lqinyli/ali/README.md deleted file mode 100644 index 34e9fc44d7d9be05b5621deb5f46d4f27b38e226..0000000000000000000000000000000000000000 --- a/spaces/lqinyli/ali/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Alist -emoji: 🦀 -colorFrom: red -colorTo: pink -sdk: docker -pinned: false -license: agpl-3.0 -app_port: 5244 -duplicated_from: xadssa/Alist ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/luca-martial/neural-style-transfer/README.md b/spaces/luca-martial/neural-style-transfer/README.md deleted file mode 100644 index 7b17136e9f6335e8d2e7da04218600ba56b3a821..0000000000000000000000000000000000000000 --- a/spaces/luca-martial/neural-style-transfer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Neural Style Transfer -emoji: 💩 -colorFrom: pink -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/luisoala/glide-test/glide_text2im/clip/encoders.py b/spaces/luisoala/glide-test/glide_text2im/clip/encoders.py deleted file mode 100644 index ee72773c2c891d2dda6d02933e88599b5330b052..0000000000000000000000000000000000000000 --- a/spaces/luisoala/glide-test/glide_text2im/clip/encoders.py +++ /dev/null @@ -1,497 +0,0 @@ -import math -from collections import OrderedDict -from typing import List, Optional, Tuple, cast - -import attr -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .attention import ( - AttentionInfo, - DenseAttentionMask, - DenseCausalAttentionMask, - make_full_layout, - to_attention_info, -) -from .utils import Affine, LayerNorm, zero_key_bias_grad - -# Constants used in the original CLIP implementation. -image_channel_means = [122.77093945, 116.74601272, 104.09373519] -image_channel_stds = [68.50053285, 66.63215831, 70.32316309] - - -@attr.s(eq=False, repr=False) -class TextEmbedding(nn.Module): - n_vocab: int = attr.ib() - n_context: int = attr.ib() - n_state: int = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - w_voc = torch.empty((self.n_vocab, self.n_state), dtype=torch.float32, device=self.device) - w_pos = torch.empty((self.n_context, self.n_state), dtype=torch.float32, device=self.device) - - with torch.no_grad(): - w_voc.normal_(std=0.02) - w_pos.normal_(std=0.01) - - self.w_voc = nn.Parameter(w_voc) - self.w_pos = nn.Parameter(w_pos) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - if len(x.shape) != 2: - raise ValueError() - - return F.embedding(x, self.w_voc) + self.w_pos[None, :, :] - - -@attr.s(eq=False, repr=False) -class ImageEmbedding(nn.Module): - image_size: int = attr.ib() - patch_size: int = attr.ib() - n_state: int = attr.ib() - n_timestep: int = attr.ib(default=0) - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - if self.image_size % self.patch_size != 0: - raise ValueError() - - n_patch = self.image_size // self.patch_size - patch_proj = torch.empty( - (self.n_state, 3) + 2 * (self.patch_size,), dtype=torch.float32, device=self.device - ) - w_pos = torch.empty( - (1 + n_patch ** 2, self.n_state), dtype=torch.float32, device=self.device - ) - - with torch.no_grad(): - if self.n_timestep == 0: - pred_state = torch.empty((self.n_state,), dtype=torch.float32, device=self.device) - pred_state.normal_(std=1 / np.sqrt(self.n_state)) - self.pred_state = nn.Parameter(pred_state) - else: - w_t = torch.empty( - (self.n_timestep, self.n_state), dtype=torch.float32, device=self.device - ) - w_t.normal_(std=1 / np.sqrt(self.n_state)) - self.w_t = nn.Parameter(w_t) - - patch_proj.normal_(std=np.sqrt(2 / (self.n_state * self.patch_size ** 2))) - w_pos.normal_(std=1 / np.sqrt(self.n_state)) - - self.patch_proj = nn.Parameter(patch_proj) - self.w_pos = nn.Parameter(w_pos) - - self.channel_means = torch.tensor( - image_channel_means, dtype=torch.float32, device=self.device - )[None, :, None, None] - self.channel_stds = torch.tensor( - image_channel_stds, dtype=torch.float32, device=self.device - )[None, :, None, None] - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - - def forward(self, x: torch.Tensor, t: Optional[torch.Tensor] = None) -> torch.Tensor: - if len(x.shape) != 4: - raise ValueError("input should be 4d") - if x.shape[1] != 3: - raise ValueError("input should have 3 channels") - if not 
(x.shape[2] == self.image_size and x.shape[3] == self.image_size): - raise ValueError(f"input is not {self.image_size} x {self.image_size}") - - if (self.n_timestep == 0 and t is not None) or (self.n_timestep != 0 and t is None): - raise ValueError() - if self.n_timestep != 0: - assert t is not None - if len(t.shape) != 1: - raise ValueError() - if t.shape[0] != x.shape[0]: - raise ValueError() - - x = (x - self.channel_means) / self.channel_stds - x = F.conv2d(x, self.patch_proj, stride=self.patch_size) - x = x.reshape(x.shape[0], self.n_state, (self.image_size // self.patch_size) ** 2).permute( - 0, 2, 1 - ) - - sot = ( - self.pred_state[None, None].expand(x.shape[0], -1, -1) - if self.n_timestep == 0 - else F.embedding(cast(torch.Tensor, t), self.w_t)[:, None] - ) - x = torch.cat((sot, x), dim=1) + self.w_pos[None] - return self.ln(x) - - -@attr.s(eq=False, repr=False) -class AttentionResblock(nn.Module): - n_state: int = attr.ib() - n_resblocks: int = attr.ib() - attn_fn: AttentionInfo = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.n_head_state = self.n_state // self.attn_fn.n_heads - self.qk_scale = 1 / np.sqrt(self.n_head_state) - - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - self.f_q = Affine( - self.n_state, - self.n_state, - std=1 / math.sqrt(self.n_state), - use_bias=True, - bias_filter_fn=zero_key_bias_grad, - device=self.device, - ) - self.f_k = Affine( - self.n_state, - self.n_state, - std=1 / math.sqrt(self.n_state), - use_bias=False, - bias_filter_fn=zero_key_bias_grad, - device=self.device, - ) - self.f_v = Affine( - self.n_state, - self.n_state, - std=1 / math.sqrt(self.n_state), - use_bias=True, - bias_filter_fn=zero_key_bias_grad, - device=self.device, - ) - self.f_c = Affine( - self.n_state, - self.n_state, - use_bias=True, - std=1 / np.sqrt(self.n_state * self.n_resblocks ** 2), - device=self.device, - ) # XXX - - def forward(self, m: torch.Tensor) -> torch.Tensor: - n_context = m.shape[1] - n_query_pad = self.attn_fn.ctx_blks_q * self.attn_fn.block_size - n_context - n_key_pad = self.attn_fn.ctx_blks_k * self.attn_fn.block_size - n_context - assert n_query_pad >= 0 - assert n_key_pad >= 0 - - r = m - r = self.ln(r) - q, k, v = self.f_q(r), self.f_k(r), self.f_v(r) - - if n_query_pad != 0: - q = F.pad(q, (0, 0, 0, n_query_pad)) - - if n_key_pad != 0: - k = F.pad(k, (0, 0, 0, n_key_pad)) - v = F.pad(v, (0, 0, 0, n_key_pad)) - - q = q.view([q.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3)) - k = k.view([k.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3)) - v = v.view([v.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3)) - w = torch.einsum( - "bhcd,bhkd->bhck", q * math.sqrt(self.qk_scale), k * math.sqrt(self.qk_scale) - ) - - if hasattr(self.attn_fn, "pytorch_attn_bias"): - bias = self.attn_fn.pytorch_attn_bias - assert len(bias.shape) in {2, 3} - - if len(bias.shape) == 2: - w = torch.softmax(w + self.attn_fn.pytorch_attn_bias[None, None], dim=-1) - elif len(bias.shape) == 3: - w = torch.softmax(w + self.attn_fn.pytorch_attn_bias[None], dim=-1) - else: - w = torch.softmax(w, dim=-1) - - r = torch.einsum("bhck,bhkd->bhcd", w, v) - r = r.permute((0, 2, 1, 3)).reshape((r.shape[0], -1, self.n_state)) - - if n_query_pad != 0: - r = r[:, :-n_query_pad] - - assert r.shape[1] == n_context - - r = self.f_c(r) - return m + r - - -@attr.s(eq=False, repr=False) -class 
FullyConnectedResblock(nn.Module): - """ - Not imported from other files because we retain Alec's original inits. - """ - - n_state: int = attr.ib() - n_resblocks: int = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - self.f_1 = Affine( - self.n_state, - 4 * self.n_state, - use_bias=True, - std=np.sqrt(2 / (4 * self.n_state)), - device=self.device, - ) - self.f_2 = Affine( - 4 * self.n_state, - self.n_state, - use_bias=True, - std=1 / np.sqrt(self.n_state * self.n_resblocks ** 2), - device=self.device, - ) # XXX - - def forward(self, m: torch.Tensor) -> torch.Tensor: - r = m - r = self.ln(r) - - r = self.f_2(F.gelu(self.f_1(r))) - return m + r - - -@attr.s(eq=False, repr=False) -class TransformerBlock(nn.Module): - n_state: int = attr.ib() - n_resblocks: int = attr.ib() - attn_fn: AttentionInfo = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.f_attn = AttentionResblock( - self.n_state, - self.n_resblocks, - self.attn_fn, - self.device, - ) - self.f_mlp = FullyConnectedResblock(self.n_state, self.n_resblocks, self.device) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - return self.f_mlp(self.f_attn(x)) - - -@attr.s(eq=False, repr=False) -class TextFeatureExtractor(nn.Module): - n_state: int = attr.ib() - n_embd: int = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - self.f = Affine(self.n_state, self.n_embd, use_bias=False, device=self.device) - - def forward( - self, text: torch.Tensor, text_len: torch.Tensor, return_probe_features: bool = False - ) -> torch.Tensor: - if len(text.shape) != 3: - raise ValueError("expected text to be 3d") - if len(text_len.shape) != 1: - raise ValueError("expected text length to be 1d") - if text.shape[0] != text_len.shape[0]: - raise ValueError("text and text_len have inconsistent batch dimensions") - - index = (text_len - 1)[:, None, None].expand(-1, 1, text.shape[2]) - x = torch.gather(text, dim=1, index=index) - assert list(x.shape) == [text.shape[0], 1, text.shape[2]] - - if return_probe_features: - return x[:, 0] - - x = self.ln(x) - return self.f(x[:, 0]) - - -@attr.s(eq=False, repr=False) -class ImageFeatureExtractor(nn.Module): - n_state: int = attr.ib() - n_embd: int = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - self.f = Affine(self.n_state, self.n_embd, use_bias=False, device=self.device) - - def forward(self, x: torch.Tensor, return_probe_features: bool = False) -> torch.Tensor: - if return_probe_features: - return x[:, 0] - - x = self.ln(x[:, :1]) - return self.f(x[:, 0]) - - -@attr.s(eq=False, repr=False) -class TextEncoder(nn.Module): - n_bpe_vocab: int = attr.ib() - max_text_len: int = attr.ib() - n_embd: int = attr.ib() - n_head: int = attr.ib() - n_xf_blocks: int = attr.ib() - n_head_state: int = attr.ib(default=64) - device: torch.device = attr.ib(default=torch.device("cuda")) - block_size: int = attr.ib(init=False, default=32) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.n_state = self.n_head * self.n_head_state - n_rounded_context = 
self.block_size * int(math.ceil(self.max_text_len / self.block_size)) - n_pad = n_rounded_context - self.max_text_len - - args = ( - n_rounded_context, - n_rounded_context, - self.block_size, - self.n_head, - False, - n_pad, - n_pad, - ) - mask = DenseCausalAttentionMask(*args) - attn_fn = to_attention_info(mask) - - m = 1 - make_full_layout(mask).astype(np.float32) - m[m == 1] = -1e10 - attn_fn.pytorch_attn_bias = torch.from_numpy(m).to(self.device) - - blocks: List[Tuple[str, nn.Module]] = [ - ( - "input", - TextEmbedding( - self.n_bpe_vocab, self.max_text_len, self.n_state, device=self.device - ), - ) - ] - - for i in range(self.n_xf_blocks): - blocks.append( - ( - f"block_{i}", - TransformerBlock(self.n_state, 2 * self.n_xf_blocks, attn_fn, self.device), - ) - ) - - blocks.append( - ("output", TextFeatureExtractor(self.n_state, self.n_embd, device=self.device)) - ) - - self.blocks = nn.ModuleDict(OrderedDict(blocks)) - - def forward( - self, - text: torch.Tensor, - text_len: torch.Tensor, - return_probe_features: bool = False, - ) -> torch.Tensor: - - n_batch = text.shape[0] - h = self.blocks["input"](text) - - for i in range(self.n_xf_blocks): - h = self.blocks[f"block_{i}"](h) - - h = self.blocks["output"](h, text_len, return_probe_features=return_probe_features) - - assert list(h.shape) == [ - n_batch, - self.n_embd if not return_probe_features else self.n_state, - ] - return h - - -@attr.s(eq=False, repr=False) -class ImageEncoder(nn.Module): - image_size: int = attr.ib() - patch_size: int = attr.ib() - n_embd: int = attr.ib() - n_head: int = attr.ib() - n_xf_blocks: int = attr.ib() - n_head_state: int = attr.ib(default=64) - n_timestep: int = attr.ib(default=0) - device: torch.device = attr.ib(default=torch.device("cuda")) - block_size: int = attr.ib(init=False, default=32) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.n_state = self.n_head * self.n_head_state - self.n_context = 1 + (self.image_size // self.patch_size) ** 2 - n_rounded_context = self.block_size * int(math.ceil(self.n_context / self.block_size)) - n_pad = n_rounded_context - self.n_context - - args = ( - n_rounded_context, - n_rounded_context, - self.block_size, - self.n_head, - False, - n_pad, - n_pad, - ) - mask = DenseAttentionMask(*args) - attn_fn = to_attention_info(mask) - - m = 1 - make_full_layout(mask).astype(np.float32) - m[m == 1] = -1e10 - attn_fn.pytorch_attn_bias = torch.from_numpy(m).to(self.device) - - blocks: List[Tuple[str, nn.Module]] = [ - ( - "input", - ImageEmbedding( - self.image_size, - self.patch_size, - self.n_state, - n_timestep=self.n_timestep, - device=self.device, - ), - ) - ] - - for i in range(self.n_xf_blocks): - blocks.append( - ( - f"block_{i}", - TransformerBlock(self.n_state, 2 * self.n_xf_blocks, attn_fn, self.device), - ) - ) - - blocks.append(("output", ImageFeatureExtractor(self.n_state, self.n_embd, self.device))) - - self.blocks = nn.ModuleDict(OrderedDict(blocks)) - - def forward( - self, - image: torch.Tensor, - timesteps: Optional[torch.Tensor] = None, - return_probe_features: bool = False, - ) -> torch.Tensor: - n_batch = image.shape[0] - h = self.blocks["input"](image, t=timesteps) - - for i in range(self.n_xf_blocks): - h = self.blocks[f"block_{i}"](h) - - h = self.blocks["output"](h, return_probe_features=return_probe_features) - - assert list(h.shape) == [ - n_batch, - self.n_embd if not return_probe_features else self.n_state, - ] - - return h diff --git a/spaces/lwchen/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp 
b/spaces/lwchen/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp deleted file mode 100644 index 43d0b6783a5b512b55815a291fcac2bebeea31e0..0000000000000000000000000000000000000000 --- a/spaces/lwchen/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp +++ /dev/null @@ -1,24 +0,0 @@ -// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.cpp -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} diff --git a/spaces/ma-xu/LIVE/thrust/dependencies/cub/cub/cmake/cub-config.cmake b/spaces/ma-xu/LIVE/thrust/dependencies/cub/cub/cmake/cub-config.cmake deleted file mode 100644 index 0900becd8fbcff9ee791c9b990ed2bf82e26f220..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/dependencies/cub/cub/cmake/cub-config.cmake +++ /dev/null @@ -1,62 +0,0 @@ -# -# find_package(CUB) config file. -# -# Defines a CUB::CUB target that may be linked from user projects to include -# CUB. - -if (TARGET CUB::CUB) - return() -endif() - -function(_cub_declare_interface_alias alias_name ugly_name) - # 1) Only IMPORTED and ALIAS targets can be placed in a namespace. - # 2) When an IMPORTED library is linked to another target, its include - # directories are treated as SYSTEM includes. - # 3) nvcc will automatically check the CUDA Toolkit include path *before* the - # system includes. This means that the Toolkit CUB will *always* be used - # during compilation, and the include paths of an IMPORTED CUB::CUB - # target will never have any effect. - # 4) This behavior can be fixed by setting the property NO_SYSTEM_FROM_IMPORTED - # on EVERY target that links to CUB::CUB. This would be a burden and a - # footgun for our users. Forgetting this would silently pull in the wrong CUB! - # 5) A workaround is to make a non-IMPORTED library outside of the namespace, - # configure it, and then ALIAS it into the namespace (or ALIAS and then - # configure, that seems to work too). - add_library(${ugly_name} INTERFACE) - add_library(${alias_name} ALIAS ${ugly_name}) -endfunction() - -# -# Setup targets -# - -_cub_declare_interface_alias(CUB::CUB _CUB_CUB) -# Strip out the 'cub/cmake/' from 'cub/cmake/cub-config.cmake': -get_filename_component(_CUB_INCLUDE_DIR "../.." 
ABSOLUTE BASE_DIR "${CMAKE_CURRENT_LIST_DIR}") -target_include_directories(_CUB_CUB INTERFACE "${_CUB_INCLUDE_DIR}") - -if (CUB_IGNORE_DEPRECATED_CPP_DIALECT OR - THRUST_IGNORE_DEPRECATED_CPP_DIALECT) - target_compile_definitions(_CUB_CUB INTERFACE "CUB_IGNORE_DEPRECATED_CPP_DIALECT") -endif() - -if (CUB_IGNORE_DEPRECATED_CPP_11 OR - THRUST_IGNORE_DEPRECATED_CPP_11) - target_compile_definitions(_CUB_CUB INTERFACE "CUB_IGNORE_DEPRECATED_CPP_11") -endif() - -if (CUB_IGNORE_DEPRECATED_COMPILER OR - THRUST_IGNORE_DEPRECATED_COMPILER) - target_compile_definitions(_CUB_CUB INTERFACE "CUB_IGNORE_DEPRECATED_COMPILER") -endif() - -# -# Standardize version info -# - -set(CUB_VERSION ${${CMAKE_FIND_PACKAGE_NAME}_VERSION} CACHE INTERNAL "") -set(CUB_VERSION_MAJOR ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_MAJOR} CACHE INTERNAL "") -set(CUB_VERSION_MINOR ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_MINOR} CACHE INTERNAL "") -set(CUB_VERSION_PATCH ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_PATCH} CACHE INTERNAL "") -set(CUB_VERSION_TWEAK ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_TWEAK} CACHE INTERNAL "") -set(CUB_VERSION_COUNT ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_COUNT} CACHE INTERNAL "") diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/gather.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/gather.h deleted file mode 100644 index 242da3c9095757a2c7de9e0b97ae5fe4118c8172..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/gather.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a fill of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the gather.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch gather - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. 
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_GATHER_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/gather.h> -#include __THRUST_HOST_SYSTEM_GATHER_HEADER -#undef __THRUST_HOST_SYSTEM_GATHER_HEADER - -#define __THRUST_DEVICE_SYSTEM_GATHER_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/gather.h> -#include __THRUST_DEVICE_SYSTEM_GATHER_HEADER -#undef __THRUST_DEVICE_SYSTEM_GATHER_HEADER - diff --git a/spaces/maitri-vv/Hrishikesh332-autotrain-meme-classification-42897109437/README.md b/spaces/maitri-vv/Hrishikesh332-autotrain-meme-classification-42897109437/README.md deleted file mode 100644 index 5445a745fcc3929394787b79010ff412ea494bbe..0000000000000000000000000000000000000000 --- a/spaces/maitri-vv/Hrishikesh332-autotrain-meme-classification-42897109437/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hrishikesh332 Autotrain Meme Classification 42897109437 -emoji: 📈 -colorFrom: purple -colorTo: yellow -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/tests/test_numeric_batchnorm.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/tests/test_numeric_batchnorm.py deleted file mode 100644 index 63661389782806ea2182c049448df5d05fc6d2f1..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/tests/test_numeric_batchnorm.py +++ /dev/null @@ -1,56 +0,0 @@ -# -*- coding: utf-8 -*- -# File : test_numeric_batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. 
- -import unittest - -import torch -import torch.nn as nn -from torch.autograd import Variable - -from sync_batchnorm.unittest import TorchTestCase - - -def handy_var(a, unbias=True): - n = a.size(0) - asum = a.sum(dim=0) - as_sum = (a ** 2).sum(dim=0) # a square sum - sumvar = as_sum - asum * asum / n - if unbias: - return sumvar / (n - 1) - else: - return sumvar / n - - -class NumericTestCase(TorchTestCase): - def testNumericBatchNorm(self): - a = torch.rand(16, 10) - bn = nn.BatchNorm1d(10, momentum=1, eps=1e-5, affine=False) - bn.train() - - a_var1 = Variable(a, requires_grad=True) - b_var1 = bn(a_var1) - loss1 = b_var1.sum() - loss1.backward() - - a_var2 = Variable(a, requires_grad=True) - a_mean2 = a_var2.mean(dim=0, keepdim=True) - a_std2 = torch.sqrt(handy_var(a_var2, unbias=False).clamp(min=1e-5)) - # a_std2 = torch.sqrt(a_var2.var(dim=0, keepdim=True, unbiased=False) + 1e-5) - b_var2 = (a_var2 - a_mean2) / a_std2 - loss2 = b_var2.sum() - loss2.backward() - - self.assertTensorClose(bn.running_mean, a.mean(dim=0)) - self.assertTensorClose(bn.running_var, handy_var(a)) - self.assertTensorClose(a_var1.data, a_var2.data) - self.assertTensorClose(b_var1.data, b_var2.data) - self.assertTensorClose(a_var1.grad, a_var2.grad) - - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/masterkram/finance_news_classifier/README.md b/spaces/masterkram/finance_news_classifier/README.md deleted file mode 100644 index 665c8249fe3d3269e4eec0f6bd0073fee24b940a..0000000000000000000000000000000000000000 --- a/spaces/masterkram/finance_news_classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Finance News Classifier -emoji: 💸 -colorFrom: indigo -colorTo: gray -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/matthoffner/starchat-ui/components/Promptbar/PromptBar.context.tsx b/spaces/matthoffner/starchat-ui/components/Promptbar/PromptBar.context.tsx deleted file mode 100644 index 80f9f5b18b9315f7d1db2d53c52b7cad04b92f53..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/components/Promptbar/PromptBar.context.tsx +++ /dev/null @@ -1,19 +0,0 @@ -import { Dispatch, createContext } from 'react'; - -import { ActionType } from '@/hooks/useCreateReducer'; - -import { Prompt } from '@/types/prompt'; - -import { PromptbarInitialState } from './Promptbar.state'; - -export interface PromptbarContextProps { - state: PromptbarInitialState; - dispatch: Dispatch>; - handleCreatePrompt: () => void; - handleDeletePrompt: (prompt: Prompt) => void; - handleUpdatePrompt: (prompt: Prompt) => void; -} - -const PromptbarContext = createContext(undefined!); - -export default PromptbarContext; diff --git a/spaces/mbarnig/lb_de_fr_en_pt_COQUI_VITS_TTS/app.py b/spaces/mbarnig/lb_de_fr_en_pt_COQUI_VITS_TTS/app.py deleted file mode 100644 index f99a593a98ec4fe168292cd313762a70994468ee..0000000000000000000000000000000000000000 --- a/spaces/mbarnig/lb_de_fr_en_pt_COQUI_VITS_TTS/app.py +++ /dev/null @@ -1,90 +0,0 @@ -import gradio as gr -import tempfile -from TTS.utils.synthesizer import Synthesizer -from huggingface_hub import hf_hub_download - -REPO_ID = "mbarnig/lb-de-fr-en-pt-coqui-vits-tts" - -my_title = "🇩🇪 🇫🇷 🇬🇧 🇵🇹 Mir schwätzen och Lëtzebuergesch ! 🇱🇺" -my_description = "First multilingual-multispeaker Text-to-Speech (TTS) synthesizer speaking Luxembourgish. 
This model is based on [YourTTS](https://github.com/Edresson/YourTTS), thanks to 🐸 [Coqui.ai](https://coqui.ai/)." -lb_text = "An der Zäit hunn sech den Nordwand an d'Sonn gestridden, wie vun hinnen zwee wuel méi staark wier, wéi e Wanderer, deen an ee waarme Mantel agepak war, iwwert de Wee koum." -de_text = "Einst stritten sich Nordwind und Sonne, wer von ihnen beiden wohl der Stärkere wäre, als ein Wanderer, der in einen warmen Mantel gehüllt war, des Weges daherkam." -fr_text = "La bise et le soleil se disputaient, chacun assurant qu'il était le plus fort, quand ils ont vu un voyageur qui s'avançait, enveloppé dans son manteau." -en_text = "The North Wind and the Sun were disputing which was the stronger, when a traveler came along wrapped in a warm cloak." -pt_text = "O vento norte e o Sol discutiam quem era o mais forte, quando surgiu um viajante envolvido numa capa." - -TTS_VOICES = [ - "Bernard", - "Bunny", - "Ed", - "Guy", - "Judith", - "Kerstin", - "Linda", - "Thorsten" -] - -TTS_LANGUAGES = [ - "Deutsch", - "English", - "Français", - "Lëtzebuergesch", - "Português" -] - -my_examples = [ - [lb_text, "Judith", "Lëtzebuergesch"], - [de_text, "Thorsten", "Deutsch"], - [fr_text, "Bernard", "Français"], - [en_text, "Linda", "English"], - [pt_text, "Ed", "Português"] -] - -my_article = "

            User guide

            1. Press the Submit button to generate a speech file with the default values. 2. Change the default values by clicking an example row. 3. Select a language and a voice and enter your own text. Have fun!

            Go to Internet with a Brain to read some technical info.

            " - -my_inputs = [ - gr.inputs.Textbox(lines=5, label="Input Text", default=lb_text), - gr.inputs.Radio(label="Speaker", choices = TTS_VOICES, default = "Judith"), - gr.inputs.Radio(label="Language", choices = TTS_LANGUAGES, default = "Lëtzebuergesch"), -] - -my_outputs = gr.outputs.Audio(type="file", label="Output Audio") - -def tts(text: str, speaker_idx: str, language_idx: str): - best_model_path = hf_hub_download(repo_id=REPO_ID, filename="best_model.pth") - config_path = hf_hub_download(repo_id=REPO_ID, filename="config.json") - speakers_path = hf_hub_download(repo_id=REPO_ID, filename="speakers.pth") - languages_path = hf_hub_download(repo_id=REPO_ID, filename="language_ids.json") - speaker_encoder_model_path = hf_hub_download(repo_id=REPO_ID, filename="model_se.pth") - speaker_encoder_config_path = hf_hub_download(repo_id=REPO_ID, filename="config_se.json") - - # init synthesizer - synthesizer = Synthesizer( - best_model_path, - config_path, - speakers_path, - languages_path, - None, - None, - speaker_encoder_model_path, - speaker_encoder_config_path, - False - ) - - # create audio file - wavs = synthesizer.tts(text, speaker_idx, language_idx) - with tempfile.NamedTemporaryFile(suffix = ".wav", delete = False) as fp: - synthesizer.save_wav(wavs, fp) - return fp.name - -iface = gr.Interface( - fn=tts, - inputs=my_inputs, - outputs=my_outputs, - title=my_title, - description = my_description, - article = my_article, - examples = my_examples, - allow_flagging=False -) -iface.launch() diff --git a/spaces/meraGPT/meraKB/components_keys.py b/spaces/meraGPT/meraKB/components_keys.py deleted file mode 100644 index bcdd110b9544137023c7e458fdd2a02a6c5f423d..0000000000000000000000000000000000000000 --- a/spaces/meraGPT/meraKB/components_keys.py +++ /dev/null @@ -1,4 +0,0 @@ -"""Store streamlit component keys""" - -class ComponentsKeys: - FILE_UPLOADER = "file_uploader" diff --git a/spaces/merve/fill-in-the-blank/source/uncertainty-calibration/draw_weathergraph.js b/spaces/merve/fill-in-the-blank/source/uncertainty-calibration/draw_weathergraph.js deleted file mode 100644 index 068615fb14b8e5d27869a0d270d8f0c5580e4fcc..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/uncertainty-calibration/draw_weathergraph.js +++ /dev/null @@ -1,264 +0,0 @@ -window.drawWeatherGraph = function (graphSel, fig_height, fig_width){ - - var threshold = .4 - - var thresholds = [0, .2, .4, .6, .8, 1].map((val, i) => { - var isLocked = val == 0 || val == 1 - return {val, i, isLocked, origVal: val} - }) - - var c = d3.conventions({ - sel: graphSel.html('').append('div'), - height: fig_height, - totalWidth: fig_width, - margin: {top: 100, bottom: 100} - }); - - var {predictionSel, weatherGroupSel} = (function(){ - c.y.domain([0,9]).clamp(true); - - // x-Axis - c.xAxis.ticks(5).tickFormat(d3.format('.2f')) - c.yAxis.ticks(0) - d3.drawAxis(c) - c.svg.select('.x') - .translate(-40, 1) - .selectAll('line').translate(20, 1) - - // x-Axis label - c.svg.append('text.axis-label') - .translate([c.width/2, -50]) - .at({textAnchor: 'middle'}) - .at({fill: '#000', fontSize: 14}) - .text('Model Score'); - - // Weather icons - var weatherGroupSel = c.svg.appendMany('g.weatherdata', weatherdata) - .translate(d => [c.x(d.score), c.y(d.h)]) - //.call(d3.attachTooltip) - // .on("mouseover", function(d) { - // ttSel.html(""); - // var gtSel = ttSel.append("div").html(`ground truth: ${d.label}`); - // ttSel.classed("tt-text", true); - // }) - - weatherGroupSel.append('text.icon') - 
.text(function(d,i){return emojis[d.label];}) - .at({fontSize: 18, textAnchor: 'middle', dy: 8}) - - // Add prediction circles - weatherGroupSel.append('circle.prediction') - .at({cx: 0, cy: 0, r: 14, opacity: 0, fillOpacity: 0, stroke: 'red'}); - weatherGroupSel.append('path.prediction') - .at({d: d => ['M', -10, 10, 'L', 10, -10].join(' '), stroke: 'red', opacity: 0}) - - var predictionSel = c.svg.selectAll('.prediction'); - - return {predictionSel, weatherGroupSel} - })() - - var {thresholdSel, messageSel, setThreshold} = (function(){ - var thresholdSel = c.svg.append('g.threshold') - - var thresholdGroupSel = thresholdSel.append('g') - .call(d3.drag().on('drag', - () => renderThreshold(c.x.invert(d3.clamp(0, d3.event.x, c.width)))) - ) - - var thesholdTextSel = thresholdGroupSel.append('g.axis').append('text') - .at({ - textAnchor: 'middle', - dy: '.33em', - y: c.height + 30 - }) - .text('Threshold') - - var rw = 16 - thresholdGroupSel.append('rect') - .at({ - width: rw, - x: -rw/2, - y: -10, - height: c.height + 30, - fillOpacity: .07, - }) - - var pathSel = thresholdGroupSel.append('path') - .at({ - stroke: '#000', - strokeDasharray: '2 2', - fill: 'none', - d: `M 0 -10 V ` + (c.height + 20), - }) - - - var accuracyValBox = thresholdSel.append('rect.val-box') - .at({width: 55, height: 20, x: c.width/2 + 32.5, y: c.height + 65, rx: 3, ry: 3}) - - var accuracySel = thresholdSel.append('text.big-text') - .at({x: c.width/2 - 10, y: c.height + 80, textAnchor: 'middle'}) - - var accuracyValSel = thresholdSel.append('text.val-text') - .at({x: c.width/2 + 60, y: c.height + 80, textAnchor: 'middle'}) - - - var messageSel = thresholdSel.append('text.tmessage') - .at({x: c.width/2, y: c.height + 120, textAnchor: 'middle'}) - - function renderThreshold(t){ - if (isNaN(t)) return // TODO debug this - - thresholdGroupSel.translate(c.x(t), 0) - - predictionSel.at({opacity: d => isClassifiedCorrectly(d, t) ? 0 : 1}) - - var acc = d3.mean( - weatherdata, - d => isClassifiedCorrectly(d, t) - ) - accuracySel.text('Accuracy: '); - accuracyValSel.text(d3.format('.1%')(acc)) - messageSel.text('Try dragging the threshold to find the highest accuracy.') - thesholdTextSel.text('Threshold: ' + d3.format('.2f')(t)) - - threshold = t - - function isClassifiedCorrectly(d,t) { - return d.score >= t ? d.label == 1 : d.label == 0; - }; - } - - renderThreshold(threshold) - - var timer = null - function setThreshold(newThreshold, duration){ - var interpolateFn = d3.interpolate(threshold, newThreshold) - - if (timer) timer.stop() - timer = d3.timer(ms => { - var t = Math.min(ms/duration, 1) - if (t == 1) timer.stop() - - renderThreshold(interpolateFn(t)) - }) - } - - return {thresholdSel, messageSel, setThreshold} - })() - - function drawTrueLegend(c){ - var truthAxis = c.svg.append('g').translate([fig_width + 40, 1]) - truthAxis.append('text.legend-title').text('Truth') // TODO: Maybe more of a label? "what actually happened?" 
or just remove this legend - .at({textAnchor: 'middle', fontWeight: 500, x: 20}) - - truthAxis.append('g').translate([20, 40]) - .append('text.legend-text').text('Sunny').parent() - .at({fontSize: 15}) - .append('text').text(emojis[0]) - .at({fontSize: 25, x: -30, y: 5}) - - truthAxis.append('g').translate([20, 80]) - .append('text.legend-text').text('Rainy').parent() - .at({fontSize: 15}) - .append('text').text(emojis[1]) - .at({fontSize: 25, x: -30, y: 5}) - } - drawTrueLegend(c); - - - var {thresholdsGroupSel, renderThresholds, setThresholds} = (function(){ - var valsCache = [] - var drag = d3.drag() - .on('drag', function(){ - var val = d3.clamp(0, c.x.invert(d3.mouse(c.svg.node())[0]), 1) - - // Force thresholds to stay sorted - valsCache[valsCache.activeIndex] = val - _.sortBy(valsCache).forEach((val, i) => thresholds[i].val = val) - - renderThresholds() - }) - .on('start', d => { - valsCache = thresholds.map(d => d.val) - valsCache.activeIndex = d.i - }) - - var thresholdsGroupSel = c.svg.append('g') - - thresholdsGroupSel.append('text.axis-label') - .text('Calibrated Model Score') - .translate([c.width/2, c.height + 50]) - .at({textAnchor: 'middle'}) - .at({fill: '#000', fontSize: 14}) - - thresholdsSel = thresholdsGroupSel.appendMany('g.thresholds', thresholds) - .call(drag) - .st({pointerEvents: d => d.isLocked ? 'none' : ''}) - - thresholdsSel.append('g.axis').append('text') - .at({ - textAnchor: 'middle', - dy: '.33em', - y: c.height + 20 - }) - .text(d => d3.format('.2f')(d.origVal)) - - var rw = 16 - thresholdsSel.append('rect') - .at({ - width: rw, - x: -rw/2, - height: c.height + 10, - fillOpacity: d => d.isLocked ? 0 : .07, - }) - - var pathSel = thresholdsSel.append('path') - .at({ - stroke: '#000', - strokeDasharray: '2 2', - fill: 'none', - }) - - function renderThresholds(){ - if (thresholds.some(d => isNaN(d.val))) return - - thresholdsSel - .translate(d => c.x(d.val) + .5, 0) - - pathSel.at({ - d: d => [ - 'M', 0, c.height + 10, - 'L', 0, 0, - 'L', c.x(d.origVal - d.val), -12, - ].join(' ') - }) - - if (window.calibrationCurve) calibrationCurve.renderBuckets() - } - - renderThresholds() - - var timer = null - function setThresholds(newThresholds, duration){ - var interpolateFns = thresholds - .map((d, i) => d3.interpolate(d.val, newThresholds[i])) - - if (timer) timer.stop() - timer = d3.timer(ms => { - var t = Math.min(ms/duration, 1) - if (t == 1) timer.stop() - - thresholds.forEach((d, i) => d.val = interpolateFns[i](t)) - - renderThresholds() - }) - } - - return {thresholdsGroupSel, renderThresholds, setThresholds} - })() - - return {c, thresholdSel, messageSel, setThreshold, predictionSel, thresholds, thresholdsGroupSel, renderThresholds, setThresholds, weatherGroupSel}; - -} - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/merve/hidden-bias/source/measuring-diversity/columns-height.js b/spaces/merve/hidden-bias/source/measuring-diversity/columns-height.js deleted file mode 100644 index 3933c17b4bb8abe209b3573bb436c53c47543b1b..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/measuring-diversity/columns-height.js +++ /dev/null @@ -1,177 +0,0 @@ -window.initColumns = function(id, metrics, measures){ - var c = d3.conventions({ - sel: d3.select(id).html('').st({width: 775, margin: '0px auto', left: 27}), - margin: {left: 260, top: 40}, - height: 600, - }) - - var sets = d3.range(numRows).map(i => { - var shapes = columnShapes[i] - shapes = _.sortBy(shapes, d => d.shape) - shapes = _.sortBy(shapes, d => 
d.size) - shapes = _.sortBy(shapes, d => d.color) - shapes = _.sortBy(shapes, d => d.color == 'green' ? 0 : 1) - - - shapes.nG = d3.sum(shapes, d => d.color == 'green') - shapes.nB = d3.sum(shapes, d => d.color == 'blue') - shapes.nO = d3.sum(shapes, d => d.color == 'orange') - shapes.nR = d3.sum(shapes, d => d.color == 'red') - - shapes.forEach((d, i) => { - d.i = i - d.sizeVal = d.sizeVal < 1 ? .6 : 1 - }) - shapes.i = i - return shapes - }) - - var colW = 200 - var colWpad = 50 - var colH = 20 - var colHpad = 10 - var offsetW = -20 - - var colSel = c.svg.appendMany('g', measures) - .translate((d, i) => [.5 + i*(colW + colWpad) + offsetW, .5]) - - colSel.append('text').text(d => d.ranking_display_text) - .at({y: -20, textAnchor: 'middle', x: colW/2, fontWeight: 600, }) - - var rowSel = colSel.appendMany('g.row', sets) - .translate(d => d.i*(colH + colHpad), 1) - - var colMean = colSel.filter((d, i) => i === 0) - var colMin = colSel.filter((d, i) => i === 1) - var scoreLabelsMean = colMean.selectAll('.row').append('text') - .at({x: -5, y: 15, textAnchor: 'end'}) - .st({fontSize: '13px', opacity: .7}) - var scoreLabelsMin = colMin.selectAll('.row').append('text') - .at({x: 222, y: 15, textAnchor: 'end'}) - .st({fontSize: '13px', opacity: .7}) - - colSel.each(function(d, i){ - d.rowSel = d3.select(this).selectAll('.row') - - c.svg.append('marker') - .attr('id', 'arrow') - .attr('viewBox', '-10 -10 20 20') - .attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path') - .attr('d', 'M-6.75,-6.75 L 0,0 L -6.75,6.75') - .at({fill: '#000'}) - - - if (i){ - var pathstr = ['M', 160, -25, 'C', 215, -25, 215, -25, 215, -5].join(' ') - } else{ - var pathstr = ['M', 35, -25, 'C', -20, -25, -20, -25, -20, -5].join(' ') - } - d3.select(this).append('path') - .at({stroke: '#000', fill: 'none', d: pathstr, markerEnd: 'url(#arrow)', strokeWidth: .6}) - }) - - - var s = colH - var p = 2 - - var l0Sel = c.svg.appendMany('path.set', sets).classed('set1', true) - .translate(d => [colW + offsetW, s/2 + .5]) - - drawRow(rowSel) - function drawRow(rowSel){ - rowSel.append('rect.set.no-stroke') - .at({x: -p, y: -p, width: colW + p*2, height: colH + p*2, fill: '#fff'}).classed('set1', true) - - rowSel.appendMany('g', d => d) - .translate(d => [d.i*s + s/2, s/2]) - .each(function(d){ - - var sOffset = 12 - var classNames = [d.shape, d.size, d.color, 'rank-item'].join(' ') - var shapeSel = d3.select(this).append('rect') - .at({ - x: -s/2, - y: -s/2 + (d.size == 'small' ? sOffset/2 : 0) - .5, - width: s - .5, - height: s - (d.size == 'small' ? 
sOffset : 0), - fill: d.fill, - class: classNames - }) - - if (d.shape == 'triangle'){ - var shapeSel = d3.select(this).append('circle') - .at({r: 2, fill: '#fff', stroke: '#000', strokeWidth: .5, class: classNames}) - } - }) - - } - - var setSel = c.svg.selectAll('.set1') - .on('mouseover', selectSet) - - sets.selected = sets[0] - function selectSet(set){ - sets.selected = set - sets.forEach(d => d.selected = d == set) - setSel - .classed('selected', d => d.selected) - .filter(d => d.selected) - .lower() - - rowSel.classed('selected', d => d.selected) - - sliders.render() - } - - - var sliders = makeSliders(metrics, sets, c, selectSet, drawRow, () => { - sets.forEach(shapes => { - shapes.score = metrics.map(m => { - var v = d3.sum(shapes, (d, i) => shapes[i][m.field] == m.key) - return Math.abs(m.target - v/shapes.length) - }) - }) - - measures.forEach(m => { - sets.forEach(shapes => { - shapes[m.str] = m.fn(shapes.score) - }) - _.sortBy(sets, d => d[m.str] + d.i/10000000)//.reverse() - .forEach((d, i) => d['i' + m.str] = i) - - m.rowSel.translate(d => d['i' + m.str]*(colH + colHpad), 1) - }) - - var p = 0 - l0Sel.at({d: d => [ - 'M', p, d['iUtilitarian']*(colH + colHpad), - 'L', colWpad - p, d['iEgalitarian']*(colH + colHpad), - ].join(' ')}) - - - scoreLabelsMean.text(d => { - return d3.format('.2f')(d['Utilitarian'])// + '%' - }) - scoreLabelsMin.text(d => { - return measures[1].ppFn(d['score']).replace('%', '')// + '%' - }) - }) - - sliders.render() - selectSet(_.sortBy(sets, d => d.iEgalitarian)[0]) -} -window.initColumns('#columns-height', metrics1, measures) -window.initColumns('#columns-height-disagree', metrics2, measures2) - -// Only highlight green items in the second ranking chart. -d3.select('#columns-height-disagree').selectAll('.rank-item').at({opacity: .3}) -d3.select('#columns-height-disagree').selectAll('.green').at({opacity: 1}) - -// Only highlight the green slider in the second ranking chart. -d3.select('#columns-height-disagree').selectAll('.slider').at({opacity: d => { - return d.key !== 'green' ? 0.35: 1 -}}) - diff --git a/spaces/merve/uncertainty-calibration/public/third_party/recirc.js b/spaces/merve/uncertainty-calibration/public/third_party/recirc.js deleted file mode 100644 index 37b65f4b8cf3c3ba504a0a3b906f8c19befc6730..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/third_party/recirc.js +++ /dev/null @@ -1,58 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - -d3.loadData('../posts.json', (err, res) => { - var posts = res[0] - .filter(d => !window.location.href.includes(d.permalink)) - .filter(d => d.shareimg.includes('http')) - posts = d3.shuffle(posts) - - var isMobile = innerWidth < 900 - var postSel = d3.select('#recirc').html('').appendMany('a.post', posts) - .st({ - width: isMobile ? '100%' : '330px', - display: 'inline-block', - verticalAlign: 'top', - marginRight: isMobile ? 
0 : 30, - textDecoration: 'none', - }) - .at({href: d => '..' + d.permalink}) - - - postSel.append('div.img') - .st({ - width: '100%', - height: 200, - backgroundImage: d => `url(${d.shareimgabstract || d.shareimg})`, - backgroundSize: 'cover', - backgroundPosition: 'center', - }) - - postSel.append('p.title') - .text(d => d.shorttitle || d.title) - .st({ - verticalAlign: 'top', - marginTop: 10, - textDecoration: 'none', - }) - - postSel.append('p.summary') - .text(d => d.socialsummary || d.summary) - - -}) \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/footnote.css b/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/footnote.css deleted file mode 100644 index 83472e6bc26c962b1c2fcc630d641ed62f181e77..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/uncertainty-calibration/footnote.css +++ /dev/null @@ -1,57 +0,0 @@ -.tooltip-footnote { - top: -1000px; - position: absolute; - padding: 10px; - background: rgba(255, 255, 255, .8); - border: 0px solid lightgray; - - width: 300px !important; - font-size: 14px; - line-height: 1.4em; - background: rgba(0, 0, 0, .8); - color: #fff; - pointer-events: all !important; -} -.tooltip-footnote a{ - color: #fff !important; -} -.tooltip-footnote:hover{ -/* opacity: 1; - pointer-events: all !important; -*/} - -.tooltip-footnote-hidden{ - opacity: 0; - transition: opacity .3s; - transition-delay: .2s; - pointer-events: none !important; -} - -@media (max-width: 590px){ - .footend{ - margin-left: 0px; - width: 10px; - } - - div.tooltip-footnote{ - transition: all 0s !important; - transition-delay: 0s !important; - - display: none; - position: fixed; - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -.footstart{ - padding-left: 2px; - height: 8px !important; - /*background: red;*/ - /*display: inline-block;*/ - line-height: 0em; -} diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/training/training_loop.py b/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/training/training_loop.py deleted file mode 100644 index d9ccb45b1a0321f1d938efa6a62229ffe396dcfe..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/stylegan/stylegan_tf/training/training_loop.py +++ /dev/null @@ -1,278 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Main training script.""" - -import os -import numpy as np -import tensorflow as tf -import dnnlib -import dnnlib.tflib as tflib -from dnnlib.tflib.autosummary import autosummary - -import config -import train -from training import dataset -from training import misc -from metrics import metric_base - -#---------------------------------------------------------------------------- -# Just-in-time processing of training images before feeding them to the networks. 
- -def process_reals(x, lod, mirror_augment, drange_data, drange_net): - with tf.name_scope('ProcessReals'): - with tf.name_scope('DynamicRange'): - x = tf.cast(x, tf.float32) - x = misc.adjust_dynamic_range(x, drange_data, drange_net) - if mirror_augment: - with tf.name_scope('MirrorAugment'): - s = tf.shape(x) - mask = tf.random_uniform([s[0], 1, 1, 1], 0.0, 1.0) - mask = tf.tile(mask, [1, s[1], s[2], s[3]]) - x = tf.where(mask < 0.5, x, tf.reverse(x, axis=[3])) - with tf.name_scope('FadeLOD'): # Smooth crossfade between consecutive levels-of-detail. - s = tf.shape(x) - y = tf.reshape(x, [-1, s[1], s[2]//2, 2, s[3]//2, 2]) - y = tf.reduce_mean(y, axis=[3, 5], keepdims=True) - y = tf.tile(y, [1, 1, 1, 2, 1, 2]) - y = tf.reshape(y, [-1, s[1], s[2], s[3]]) - x = tflib.lerp(x, y, lod - tf.floor(lod)) - with tf.name_scope('UpscaleLOD'): # Upscale to match the expected input/output size of the networks. - s = tf.shape(x) - factor = tf.cast(2 ** tf.floor(lod), tf.int32) - x = tf.reshape(x, [-1, s[1], s[2], 1, s[3], 1]) - x = tf.tile(x, [1, 1, 1, factor, 1, factor]) - x = tf.reshape(x, [-1, s[1], s[2] * factor, s[3] * factor]) - return x - -#---------------------------------------------------------------------------- -# Evaluate time-varying training parameters. - -def training_schedule( - cur_nimg, - training_set, - num_gpus, - lod_initial_resolution = 4, # Image resolution used at the beginning. - lod_training_kimg = 600, # Thousands of real images to show before doubling the resolution. - lod_transition_kimg = 600, # Thousands of real images to show when fading in new layers. - minibatch_base = 16, # Maximum minibatch size, divided evenly among GPUs. - minibatch_dict = {}, # Resolution-specific overrides. - max_minibatch_per_gpu = {}, # Resolution-specific maximum minibatch size per GPU. - G_lrate_base = 0.001, # Learning rate for the generator. - G_lrate_dict = {}, # Resolution-specific overrides. - D_lrate_base = 0.001, # Learning rate for the discriminator. - D_lrate_dict = {}, # Resolution-specific overrides. - lrate_rampup_kimg = 0, # Duration of learning rate ramp-up. - tick_kimg_base = 160, # Default interval of progress snapshots. - tick_kimg_dict = {4: 160, 8:140, 16:120, 32:100, 64:80, 128:60, 256:40, 512:30, 1024:20}): # Resolution-specific overrides. - - # Initialize result dict. - s = dnnlib.EasyDict() - s.kimg = cur_nimg / 1000.0 - - # Training phase. - phase_dur = lod_training_kimg + lod_transition_kimg - phase_idx = int(np.floor(s.kimg / phase_dur)) if phase_dur > 0 else 0 - phase_kimg = s.kimg - phase_idx * phase_dur - - # Level-of-detail and resolution. - s.lod = training_set.resolution_log2 - s.lod -= np.floor(np.log2(lod_initial_resolution)) - s.lod -= phase_idx - if lod_transition_kimg > 0: - s.lod -= max(phase_kimg - lod_training_kimg, 0.0) / lod_transition_kimg - s.lod = max(s.lod, 0.0) - s.resolution = 2 ** (training_set.resolution_log2 - int(np.floor(s.lod))) - - # Minibatch size. - s.minibatch = minibatch_dict.get(s.resolution, minibatch_base) - s.minibatch -= s.minibatch % num_gpus - if s.resolution in max_minibatch_per_gpu: - s.minibatch = min(s.minibatch, max_minibatch_per_gpu[s.resolution] * num_gpus) - - # Learning rate. - s.G_lrate = G_lrate_dict.get(s.resolution, G_lrate_base) - s.D_lrate = D_lrate_dict.get(s.resolution, D_lrate_base) - if lrate_rampup_kimg > 0: - rampup = min(s.kimg / lrate_rampup_kimg, 1.0) - s.G_lrate *= rampup - s.D_lrate *= rampup - - # Other parameters. 
- s.tick_kimg = tick_kimg_dict.get(s.resolution, tick_kimg_base) - return s - -#---------------------------------------------------------------------------- -# Main training script. - -def training_loop( - submit_config, - G_args = {}, # Options for generator network. - D_args = {}, # Options for discriminator network. - G_opt_args = {}, # Options for generator optimizer. - D_opt_args = {}, # Options for discriminator optimizer. - G_loss_args = {}, # Options for generator loss. - D_loss_args = {}, # Options for discriminator loss. - dataset_args = {}, # Options for dataset.load_dataset(). - sched_args = {}, # Options for train.TrainingSchedule. - grid_args = {}, # Options for train.setup_snapshot_image_grid(). - metric_arg_list = [], # Options for MetricGroup. - tf_config = {}, # Options for tflib.init_tf(). - G_smoothing_kimg = 10.0, # Half-life of the running average of generator weights. - D_repeats = 1, # How many times the discriminator is trained per G iteration. - minibatch_repeats = 4, # Number of minibatches to run before adjusting training parameters. - reset_opt_for_new_lod = True, # Reset optimizer internal state (e.g. Adam moments) when new layers are introduced? - total_kimg = 15000, # Total length of the training, measured in thousands of real images. - mirror_augment = False, # Enable mirror augment? - drange_net = [-1,1], # Dynamic range used when feeding image data to the networks. - image_snapshot_ticks = 1, # How often to export image snapshots? - network_snapshot_ticks = 10, # How often to export network snapshots? - save_tf_graph = False, # Include full TensorFlow computation graph in the tfevents file? - save_weight_histograms = False, # Include weight histograms in the tfevents file? - resume_run_id = None, # Run ID or network pkl to resume training from, None = start from scratch. - resume_snapshot = None, # Snapshot index to resume training from, None = autodetect. - resume_kimg = 0.0, # Assumed training progress at the beginning. Affects reporting and training schedule. - resume_time = 0.0): # Assumed wallclock time at the beginning. Affects reporting. - - # Initialize dnnlib and TensorFlow. - ctx = dnnlib.RunContext(submit_config, train) - tflib.init_tf(tf_config) - - # Load training set. - training_set = dataset.load_dataset(data_dir=config.data_dir, verbose=True, **dataset_args) - - # Construct networks. - with tf.device('/gpu:0'): - if resume_run_id is not None: - network_pkl = misc.locate_network_pkl(resume_run_id, resume_snapshot) - print('Loading networks from "%s"...' 
% network_pkl) - G, D, Gs = misc.load_pkl(network_pkl) - else: - print('Constructing networks...') - G = tflib.Network('G', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **G_args) - D = tflib.Network('D', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **D_args) - Gs = G.clone('Gs') - G.print_layers(); D.print_layers() - - print('Building TensorFlow graph...') - with tf.name_scope('Inputs'), tf.device('/cpu:0'): - lod_in = tf.placeholder(tf.float32, name='lod_in', shape=[]) - lrate_in = tf.placeholder(tf.float32, name='lrate_in', shape=[]) - minibatch_in = tf.placeholder(tf.int32, name='minibatch_in', shape=[]) - minibatch_split = minibatch_in // submit_config.num_gpus - Gs_beta = 0.5 ** tf.div(tf.cast(minibatch_in, tf.float32), G_smoothing_kimg * 1000.0) if G_smoothing_kimg > 0.0 else 0.0 - - G_opt = tflib.Optimizer(name='TrainG', learning_rate=lrate_in, **G_opt_args) - D_opt = tflib.Optimizer(name='TrainD', learning_rate=lrate_in, **D_opt_args) - for gpu in range(submit_config.num_gpus): - with tf.name_scope('GPU%d' % gpu), tf.device('/gpu:%d' % gpu): - G_gpu = G if gpu == 0 else G.clone(G.name + '_shadow') - D_gpu = D if gpu == 0 else D.clone(D.name + '_shadow') - lod_assign_ops = [tf.assign(G_gpu.find_var('lod'), lod_in), tf.assign(D_gpu.find_var('lod'), lod_in)] - reals, labels = training_set.get_minibatch_tf() - reals = process_reals(reals, lod_in, mirror_augment, training_set.dynamic_range, drange_net) - with tf.name_scope('G_loss'), tf.control_dependencies(lod_assign_ops): - G_loss = dnnlib.util.call_func_by_name(G=G_gpu, D=D_gpu, opt=G_opt, training_set=training_set, minibatch_size=minibatch_split, **G_loss_args) - with tf.name_scope('D_loss'), tf.control_dependencies(lod_assign_ops): - D_loss = dnnlib.util.call_func_by_name(G=G_gpu, D=D_gpu, opt=D_opt, training_set=training_set, minibatch_size=minibatch_split, reals=reals, labels=labels, **D_loss_args) - G_opt.register_gradients(tf.reduce_mean(G_loss), G_gpu.trainables) - D_opt.register_gradients(tf.reduce_mean(D_loss), D_gpu.trainables) - G_train_op = G_opt.apply_updates() - D_train_op = D_opt.apply_updates() - - Gs_update_op = Gs.setup_as_moving_average_of(G, beta=Gs_beta) - with tf.device('/gpu:0'): - try: - peak_gpu_mem_op = tf.contrib.memory_stats.MaxBytesInUse() - except tf.errors.NotFoundError: - peak_gpu_mem_op = tf.constant(0) - - print('Setting up snapshot image grid...') - grid_size, grid_reals, grid_labels, grid_latents = misc.setup_snapshot_image_grid(G, training_set, **grid_args) - sched = training_schedule(cur_nimg=total_kimg*1000, training_set=training_set, num_gpus=submit_config.num_gpus, **sched_args) - grid_fakes = Gs.run(grid_latents, grid_labels, is_validation=True, minibatch_size=sched.minibatch//submit_config.num_gpus) - - print('Setting up run dir...') - misc.save_image_grid(grid_reals, os.path.join(submit_config.run_dir, 'reals.png'), drange=training_set.dynamic_range, grid_size=grid_size) - misc.save_image_grid(grid_fakes, os.path.join(submit_config.run_dir, 'fakes%06d.png' % resume_kimg), drange=drange_net, grid_size=grid_size) - summary_log = tf.summary.FileWriter(submit_config.run_dir) - if save_tf_graph: - summary_log.add_graph(tf.get_default_graph()) - if save_weight_histograms: - G.setup_weight_histograms(); D.setup_weight_histograms() - metrics = metric_base.MetricGroup(metric_arg_list) - - print('Training...\n') - ctx.update('', cur_epoch=resume_kimg, max_epoch=total_kimg) - 
maintenance_time = ctx.get_last_update_interval() - cur_nimg = int(resume_kimg * 1000) - cur_tick = 0 - tick_start_nimg = cur_nimg - prev_lod = -1.0 - while cur_nimg < total_kimg * 1000: - if ctx.should_stop(): break - - # Choose training parameters and configure training ops. - sched = training_schedule(cur_nimg=cur_nimg, training_set=training_set, num_gpus=submit_config.num_gpus, **sched_args) - training_set.configure(sched.minibatch // submit_config.num_gpus, sched.lod) - if reset_opt_for_new_lod: - if np.floor(sched.lod) != np.floor(prev_lod) or np.ceil(sched.lod) != np.ceil(prev_lod): - G_opt.reset_optimizer_state(); D_opt.reset_optimizer_state() - prev_lod = sched.lod - - # Run training ops. - for _mb_repeat in range(minibatch_repeats): - for _D_repeat in range(D_repeats): - tflib.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch}) - cur_nimg += sched.minibatch - tflib.run([G_train_op], {lod_in: sched.lod, lrate_in: sched.G_lrate, minibatch_in: sched.minibatch}) - - # Perform maintenance tasks once per tick. - done = (cur_nimg >= total_kimg * 1000) - if cur_nimg >= tick_start_nimg + sched.tick_kimg * 1000 or done: - cur_tick += 1 - tick_kimg = (cur_nimg - tick_start_nimg) / 1000.0 - tick_start_nimg = cur_nimg - tick_time = ctx.get_time_since_last_update() - total_time = ctx.get_time_since_start() + resume_time - - # Report progress. - print('tick %-5d kimg %-8.1f lod %-5.2f minibatch %-4d time %-12s sec/tick %-7.1f sec/kimg %-7.2f maintenance %-6.1f gpumem %-4.1f' % ( - autosummary('Progress/tick', cur_tick), - autosummary('Progress/kimg', cur_nimg / 1000.0), - autosummary('Progress/lod', sched.lod), - autosummary('Progress/minibatch', sched.minibatch), - dnnlib.util.format_time(autosummary('Timing/total_sec', total_time)), - autosummary('Timing/sec_per_tick', tick_time), - autosummary('Timing/sec_per_kimg', tick_time / tick_kimg), - autosummary('Timing/maintenance_sec', maintenance_time), - autosummary('Resources/peak_gpu_mem_gb', peak_gpu_mem_op.eval() / 2**30))) - autosummary('Timing/total_hours', total_time / (60.0 * 60.0)) - autosummary('Timing/total_days', total_time / (24.0 * 60.0 * 60.0)) - - # Save snapshots. - if cur_tick % image_snapshot_ticks == 0 or done: - grid_fakes = Gs.run(grid_latents, grid_labels, is_validation=True, minibatch_size=sched.minibatch//submit_config.num_gpus) - misc.save_image_grid(grid_fakes, os.path.join(submit_config.run_dir, 'fakes%06d.png' % (cur_nimg // 1000)), drange=drange_net, grid_size=grid_size) - if cur_tick % network_snapshot_ticks == 0 or done or cur_tick == 1: - pkl = os.path.join(submit_config.run_dir, 'network-snapshot-%06d.pkl' % (cur_nimg // 1000)) - misc.save_pkl((G, D, Gs), pkl) - metrics.run(pkl, run_dir=submit_config.run_dir, num_gpus=submit_config.num_gpus, tf_config=tf_config) - - # Update summaries and RunContext. - metrics.update_autosummaries() - tflib.autosummary.save_summaries(summary_log, cur_nimg) - ctx.update('%.2f' % sched.lod, cur_epoch=cur_nimg // 1000, max_epoch=total_kimg) - maintenance_time = ctx.get_last_update_interval() - tick_time - - # Write final results. 
- misc.save_pkl((G, D, Gs), os.path.join(submit_config.run_dir, 'network-final.pkl')) - summary_log.close() - - ctx.close() - -#---------------------------------------------------------------------------- diff --git a/spaces/micole66/electra/README.md b/spaces/micole66/electra/README.md deleted file mode 100644 index f179ccc9883727907348717679bf7a2dfed7e1c3..0000000000000000000000000000000000000000 --- a/spaces/micole66/electra/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Electra -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/mikeee/gradio-deepl/split_text.py b/spaces/mikeee/gradio-deepl/split_text.py deleted file mode 100644 index 1e17844d17c84f22ee331099629d0d054e4c3c7c..0000000000000000000000000000000000000000 --- a/spaces/mikeee/gradio-deepl/split_text.py +++ /dev/null @@ -1,46 +0,0 @@ -"""Split text to limit chars per chunk. - -Converted from splitText.js. -""" -# pylint: disable=invalid-name, broad-except -from typing import Optional - -from logzero import logger - -limit_ = 4900 - - -def split_text(text: str, limit: Optional[int] = None): - """Split text to limit chars per chunk.""" - if not text: # handle text="" - return [text] - - if limit is None: - limit = limit_ - else: - try: - limit = int(limit) - except Exception as exc: - logger.error(exc) - limit = limit_ - if limit < 1: - limit = limit_ - - chunks = [] - paragraphs = text.splitlines() - current_chunk = paragraphs[0] + "\n" - for paragraph in paragraphs[1:]: - if len(current_chunk) + len(paragraph) <= limit: - # Add paragraph to current chunk - current_chunk += paragraph + "\n" - else: - # Save current chunk and start a new one with this paragraph - chunks.append(current_chunk) - current_chunk = paragraph + "\n" - # Add the last chunk - chunks.append(current_chunk) - - # remove extra \n and possible blank in the beginning - # return list(filter(lambda _: _.strip(), map(lambda _: _.strip(), chunks))) - - return chunks diff --git a/spaces/mindart/infinite-zoom-stable-diffusion/helpers/video.py b/spaces/mindart/infinite-zoom-stable-diffusion/helpers/video.py deleted file mode 100644 index a5042d934b008923914cb241e1911ef91e78459d..0000000000000000000000000000000000000000 --- a/spaces/mindart/infinite-zoom-stable-diffusion/helpers/video.py +++ /dev/null @@ -1,39 +0,0 @@ -import cv2 -import numpy as np - - -def write_video(file_path, frames, fps, reversed=True, start_frame_dupe_amount=15, last_frame_dupe_amount=30): - """ - Writes frames to an mp4 video file - :param file_path: Path to output video, must end with .mp4 - :param frames: List of PIL.Image objects - :param fps: Desired frame rate - :param reversed: if order of images to be reversed (default = True) - """ - if 
reversed == True: - frames.reverse() - - w, h = frames[0].size - fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v') - # fourcc = cv2.VideoWriter_fourcc('h', '2', '6', '4') - # fourcc = cv2.VideoWriter_fourcc(*'avc1') - writer = cv2.VideoWriter(file_path, fourcc, fps, (w, h)) - -# start frame duplicated - for x in range(start_frame_dupe_amount): - np_frame = np.array(frames[0].convert('RGB')) - cv_frame = cv2.cvtColor(np_frame, cv2.COLOR_RGB2BGR) - writer.write(cv_frame) - - for frame in frames: - np_frame = np.array(frame.convert('RGB')) - cv_frame = cv2.cvtColor(np_frame, cv2.COLOR_RGB2BGR) - writer.write(cv_frame) - -# last frame duplicated - for x in range(last_frame_dupe_amount): - np_frame = np.array(frames[len(frames) - 1].convert('RGB')) - cv_frame = cv2.cvtColor(np_frame, cv2.COLOR_RGB2BGR) - writer.write(cv_frame) - - writer.release() diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/generated/client/nodes/5.js b/spaces/mithril-security/blind_chat/.svelte-kit/generated/client/nodes/5.js deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/mmecheri/Rakuten_Streamlit/dataset.py b/spaces/mmecheri/Rakuten_Streamlit/dataset.py deleted file mode 100644 index c68439cf996362e64291ff8d889169361feae052..0000000000000000000000000000000000000000 --- a/spaces/mmecheri/Rakuten_Streamlit/dataset.py +++ /dev/null @@ -1,24 +0,0 @@ -import streamlit as st -import dataset_description, data_exp_Viz, data_preprocessing -from multiapp import MultiApp -from submultiapp import SubMultiApp - -def app(): - - st.title("Jeu de données") - - apps = SubMultiApp(None, 'Donnée') - - apps.add_app("Description", dataset_description.app) - apps.add_app("Exploration et DataViz", data_exp_Viz.app) - apps.add_app("Preprocessing", data_preprocessing.app) - - apps.run() - - -def read_text(homepage_path): - '''The home page. ''' - with open(homepage_path, 'r', encoding='utf-8') as homepage: - homepage = homepage.read().split('------') - st.markdown(homepage[0], unsafe_allow_html=True) - \ No newline at end of file diff --git a/spaces/mms-meta/MMS/uroman/bin/uroman.pl b/spaces/mms-meta/MMS/uroman/bin/uroman.pl deleted file mode 100644 index f1182aee6e5c3422882150b5babeec664b689401..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/uroman/bin/uroman.pl +++ /dev/null @@ -1,138 +0,0 @@ -#!/usr/bin/perl -w - -# uroman Nov. 12, 2015 - Apr. 
23, 2021 -$version = "v1.2.8"; -# Author: Ulf Hermjakob - -# Usage: uroman.pl {-l [ara|bel|bul|deu|ell|eng|fas|grc|heb|kaz|kir|lav|lit|mkd|mkd2|oss|pnt|rus|srp|srp2|tur|uig|ukr|yid]} {--chart|--offset-mapping} {--no-cache} {--workset} < STDIN -# Example: cat workset.txt | uroman.pl --offset-mapping --workset - -$|=1; - -use FindBin; -use Cwd "abs_path"; -use File::Basename qw(dirname); -use File::Spec; - -my $bin_dir = abs_path(dirname($0)); -my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir()); -my $data_dir = File::Spec->catfile($root_dir, "data"); -my $lib_dir = File::Spec->catfile($root_dir, "lib"); - -use lib "$FindBin::Bin/../lib"; -use NLP::Chinese; -use NLP::Romanizer; -use NLP::UTF8; -use NLP::utilities; -use JSON; -$chinesePM = NLP::Chinese; -$romanizer = NLP::Romanizer; -$util = NLP::utilities; -%ht = (); -%pinyin_ht = (); -$lang_code = ""; -$return_chart_p = 0; -$return_offset_mappings_p = 0; -$workset_p = 0; -$cache_rom_tokens_p = 1; - -$script_data_filename = File::Spec->catfile($data_dir, "Scripts.txt"); -$unicode_data_overwrite_filename = File::Spec->catfile($data_dir, "UnicodeDataOverwrite.txt"); -$unicode_data_filename = File::Spec->catfile($data_dir, "UnicodeData.txt"); -$romanization_table_filename = File::Spec->catfile($data_dir, "romanization-table.txt"); -$chinese_tonal_pinyin_filename = File::Spec->catfile($data_dir, "Chinese_to_Pinyin.txt"); - -while (@ARGV) { - $arg = shift @ARGV; - if ($arg =~ /^-+(l|lc|lang-code)$/) { - $lang_code = lc (shift @ARGV || "") - } elsif ($arg =~ /^-+chart$/i) { - $return_chart_p = 1; - } elsif ($arg =~ /^-+workset$/i) { - $workset_p = 1; - } elsif ($arg =~ /^-+offset[-_]*map/i) { - $return_offset_mappings_p = 1; - } elsif ($arg =~ /^-+unicode[-_]?data/i) { - $filename = shift @ARGV; - if (-r $filename) { - $unicode_data_filename = $filename; - } else { - print STDERR "Ignoring invalid UnicodeData filename $filename\n"; - } - } elsif ($arg =~ /^-+(no-tok-cach|no-cach)/i) { - $cache_rom_tokens_p = 0; - } else { - print STDERR "Ignoring unrecognized arg $arg\n"; - } -} - -$romanizer->load_script_data(*ht, $script_data_filename); -$romanizer->load_unicode_data(*ht, $unicode_data_filename); -$romanizer->load_unicode_overwrite_romanization(*ht, $unicode_data_overwrite_filename); -$romanizer->load_romanization_table(*ht, $romanization_table_filename); -$chinese_to_pinyin_not_yet_loaded_p = 1; -$current_date = $util->datetime("dateTtime"); -$lang_code_clause = ($lang_code) ? 
" \"lang-code\":\"$lang_code\",\n" : ""; - -print "{\n \"romanizer\":\"uroman $version (Ulf Hermjakob, USC/ISI)\",\n \"date\":\"$current_date\",\n$lang_code_clause \"romanization\": [\n" if $return_chart_p; -my $line_number = 0; -my $chart_result = ""; -while (<>) { - $line_number++; - my $line = $_; - my $snt_id = ""; - if ($workset_p) { - next if $line =~ /^#/; - if (($i_value, $s_value) = ($line =~ /^(\S+\.\d+)\s(.*)$/)) { - $snt_id = $i_value; - $line = "$s_value\n"; - } else { - next; - } - } - if ($chinese_to_pinyin_not_yet_loaded_p && $chinesePM->string_contains_utf8_cjk_unified_ideograph_p($line)) { - $chinesePM->read_chinese_tonal_pinyin_files(*pinyin_ht, $chinese_tonal_pinyin_filename); - $chinese_to_pinyin_not_yet_loaded_p = 0; - } - if ($return_chart_p) { - print $chart_result; - *chart_ht = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return chart", $line_number); - $chart_result = $romanizer->chart_to_json_romanization_elements(0, $chart_ht{N_CHARS}, *chart_ht, $line_number); - } elsif ($return_offset_mappings_p) { - ($best_romanization, $offset_mappings) = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return offset mappings", $line_number, 0); - print "::snt-id $snt_id\n" if $workset_p; - print "::orig $line"; - print "::rom $best_romanization\n"; - print "::align $offset_mappings\n\n"; - } elsif ($cache_rom_tokens_p) { - print $romanizer->romanize_by_token_with_caching($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n"; - } else { - print $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n"; - } -} -$chart_result =~ s/,(\s*)$/$1/; -print $chart_result; -print " ]\n}\n" if $return_chart_p; - -$dev_test_p = 0; -if ($dev_test_p) { - $n_suspicious_code_points = 0; - $n_instances = 0; - foreach $char_name (sort { hex($ht{UTF_NAME_TO_UNICODE}->{$a}) <=> hex($ht{UTF_NAME_TO_UNICODE}->{$b}) } - keys %{$ht{SUSPICIOUS_ROMANIZATION}}) { - $unicode_value = $ht{UTF_NAME_TO_UNICODE}->{$char_name}; - $utf8_string = $ht{UTF_NAME_TO_CODE}->{$char_name}; - foreach $romanization (sort keys %{$ht{SUSPICIOUS_ROMANIZATION}->{$char_name}}) { - $count = $ht{SUSPICIOUS_ROMANIZATION}->{$char_name}->{$romanization}; - $s = ($count == 1) ? "" : "s"; - print STDERR "*** Suspiciously lengthy romanization:\n" unless $n_suspicious_code_points; - print STDERR "::s $utf8_string ::t $romanization ::comment $char_name (U+$unicode_value)\n"; - $n_suspicious_code_points++; - $n_instances += $count; - } - } - print STDERR " *** Total of $n_suspicious_code_points suspicious code points ($n_instances instance$s)\n" if $n_suspicious_code_points; -} - -exit 0; - diff --git a/spaces/mrneuralnet/P-PD/utils/tools.py b/spaces/mrneuralnet/P-PD/utils/tools.py deleted file mode 100644 index 9e0a9ca953cf28216d0e427e343adeab34f1dc31..0000000000000000000000000000000000000000 --- a/spaces/mrneuralnet/P-PD/utils/tools.py +++ /dev/null @@ -1,140 +0,0 @@ -import os -import cv2 -import torch -import numpy as np -from PIL import Image -from dlib import cnn_face_detection_model_v1 as face_detect_model - - -def center_crop(im, length): - w, h = im.size - left = w//2 - length//2 - right = w//2 + length//2 - top = h//2 - length//2 - bottom = h//2 + length//2 - return im.crop((left, top, right, bottom)), (left, top) - - -def remove_boundary(img): - """ - Remove boundary artifacts that FAL causes. 
- """ - w, h = img.size - left = w//80 - top = h//50 - right = w*79//80 - bottom = h*24//25 - return img.crop((left, top, right, bottom)) - - -def resize_shorter_side(img, min_length): - """ - Resize the shorter side of img to min_length while - preserving the aspect ratio. - """ - ow, oh = img.size - mult = 8 - if ow < oh: - if ow == min_length and oh % mult == 0: - return img, (ow, oh) - w = min_length - h = int(min_length * oh / ow) - else: - if oh == min_length and ow % mult == 0: - return img, (ow, oh) - h = min_length - w = int(min_length * ow / oh) - return img.resize((w, h), Image.BICUBIC), (w, h) - - -def flow_resize(flow, sz): - oh, ow, _ = flow.shape - w, h = sz - u_ = cv2.resize(flow[:,:,0], (w, h)) - v_ = cv2.resize(flow[:,:,1], (w, h)) - u_ *= w / float(ow) - v_ *= h / float(oh) - return np.dstack((u_,v_)) - - -def warp(im, flow, alpha=1, interp=cv2.INTER_CUBIC): - height, width, _ = flow.shape - cart = np.dstack(np.meshgrid(np.arange(width), np.arange(height))) - pixel_map = (cart + alpha * flow).astype(np.float32) - warped = cv2.remap( - im, - pixel_map[:, :, 0], - pixel_map[:, :, 1], - interp, - borderMode=cv2.BORDER_REPLICATE) - return warped - - -cnn_face_detector = None -def face_detection( - img, - verbose=False, - model_file='utils/dlib_face_detector/mmod_human_face_detector.dat'): - """ - Detects faces using dlib cnn face detection, and extend the bounding box - to include the entire face. - """ - def shrink(img, max_length=2048): - ow, oh = img.size - if max_length >= max(ow, oh): - return img, 1.0 - - if ow > oh: - mult = max_length / ow - else: - mult = max_length / oh - w = int(ow * mult) - h = int(oh * mult) - return img.resize((w, h), Image.BILINEAR), mult - - global cnn_face_detector - if cnn_face_detector is None: - cnn_face_detector = face_detect_model(model_file) - - w, h = img.size - img_shrinked, mult = shrink(img) - - im = np.asarray(img_shrinked) - if len(im.shape) != 3 or im.shape[2] != 3: - return [] - - crop_ims = [] - dets = cnn_face_detector(im, 0) - for k, d in enumerate(dets): - top = d.rect.top() / mult - bottom = d.rect.bottom() / mult - left = d.rect.left() / mult - right = d.rect.right() / mult - - wid = right - left - left = max(0, left - wid // 2.5) - top = max(0, top - wid // 1.5) - right = min(w - 1, right + wid // 2.5) - bottom = min(h - 1, bottom + wid // 2.5) - - if d.confidence > 1: - if verbose: - print("%d-th face detected: (%d, %d, %d, %d)" % - (k, left, top, right, bottom)) - crop_im = img.crop((left, top, right, bottom)) - crop_ims.append((crop_im, (left, top, right, bottom))) - - return crop_ims - - -def mkdirs(paths): - if isinstance(paths, list) and not isinstance(paths, str): - for path in paths: - mkdir(path) - else: - mkdir(paths) - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/noisychannel/rerank_generate.py b/spaces/mshukor/UnIVAL/fairseq/examples/noisychannel/rerank_generate.py deleted file mode 100644 index daeeae059a677a9fcd7c370be087f1f5c189bc52..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/noisychannel/rerank_generate.py +++ /dev/null @@ -1,397 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Generate n-best translations using a trained model. 
-""" - -import os -import subprocess -from contextlib import redirect_stdout - -from fairseq import options -from fairseq_cli import generate, preprocess - -from examples.noisychannel import rerank_options, rerank_utils - - -def gen_and_reprocess_nbest(args): - if args.score_dict_dir is None: - args.score_dict_dir = args.data - if args.prefix_len is not None: - assert ( - args.right_to_left1 is False - ), "prefix length not compatible with right to left models" - assert ( - args.right_to_left2 is False - ), "prefix length not compatible with right to left models" - - if args.nbest_list is not None: - assert args.score_model2 is None - - if args.backwards1: - scorer1_src = args.target_lang - scorer1_tgt = args.source_lang - else: - scorer1_src = args.source_lang - scorer1_tgt = args.target_lang - - store_data = ( - os.path.join(os.path.dirname(__file__)) + "/rerank_data/" + args.data_dir_name - ) - if not os.path.exists(store_data): - os.makedirs(store_data) - - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - args.shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - assert not ( - args.right_to_left1 and args.backwards1 - ), "backwards right to left not supported" - assert not ( - args.right_to_left2 and args.backwards2 - ), "backwards right to left not supported" - assert not ( - args.prefix_len is not None and args.target_prefix_frac is not None - ), "target prefix frac and target prefix len incompatible" - - # make directory to store generation results - if not os.path.exists(pre_gen): - os.makedirs(pre_gen) - - rerank1_is_gen = ( - args.gen_model == args.score_model1 and args.source_prefix_frac is None - ) - rerank2_is_gen = ( - args.gen_model == args.score_model2 and args.source_prefix_frac is None - ) - - if args.nbest_list is not None: - rerank2_is_gen = True - - # make directories to store preprossed nbest list for reranking - if not os.path.exists(left_to_right_preprocessed_dir): - os.makedirs(left_to_right_preprocessed_dir) - if not os.path.exists(right_to_left_preprocessed_dir): - os.makedirs(right_to_left_preprocessed_dir) - if not os.path.exists(lm_preprocessed_dir): - os.makedirs(lm_preprocessed_dir) - if not os.path.exists(backwards_preprocessed_dir): - os.makedirs(backwards_preprocessed_dir) - - score1_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model1_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards1, - ) - if args.score_model2 is not None: - score2_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model2_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards2, - ) - - predictions_bpe_file = pre_gen + "/generate_output_bpe.txt" - - using_nbest = args.nbest_list is not None - - if using_nbest: - print("Using predefined n-best list from interactive.py") - predictions_bpe_file = args.nbest_list - - else: - if not os.path.isfile(predictions_bpe_file): - print("STEP 1: generate predictions using the p(T|S) model with bpe") - print(args.data) - param1 = [ - args.data, - "--path", - args.gen_model, - "--shard-id", - str(args.shard_id), - "--num-shards", - str(args.num_shards), - "--nbest", - 
str(args.num_rescore), - "--batch-size", - str(args.batch_size), - "--beam", - str(args.num_rescore), - "--batch-size", - str(args.num_rescore), - "--gen-subset", - args.gen_subset, - "--source-lang", - args.source_lang, - "--target-lang", - args.target_lang, - ] - if args.sampling: - param1 += ["--sampling"] - - gen_parser = options.get_generation_parser() - input_args = options.parse_args_and_arch(gen_parser, param1) - - print(input_args) - with open(predictions_bpe_file, "w") as f: - with redirect_stdout(f): - generate.main(input_args) - - gen_output = rerank_utils.BitextOutputFromGen( - predictions_bpe_file, - bpe_symbol=args.post_process, - nbest=using_nbest, - prefix_len=args.prefix_len, - target_prefix_frac=args.target_prefix_frac, - ) - - if args.diff_bpe: - rerank_utils.write_reprocessed( - gen_output.no_bpe_source, - gen_output.no_bpe_hypo, - gen_output.no_bpe_target, - pre_gen + "/source_gen_bpe." + args.source_lang, - pre_gen + "/target_gen_bpe." + args.target_lang, - pre_gen + "/reference_gen_bpe." + args.target_lang, - ) - bitext_bpe = args.rescore_bpe_code - bpe_src_param = [ - "-c", - bitext_bpe, - "--input", - pre_gen + "/source_gen_bpe." + args.source_lang, - "--output", - pre_gen + "/rescore_data." + args.source_lang, - ] - bpe_tgt_param = [ - "-c", - bitext_bpe, - "--input", - pre_gen + "/target_gen_bpe." + args.target_lang, - "--output", - pre_gen + "/rescore_data." + args.target_lang, - ] - - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_src_param, - shell=False, - ) - - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_tgt_param, - shell=False, - ) - - if (not os.path.isfile(score1_file) and not rerank1_is_gen) or ( - args.score_model2 is not None - and not os.path.isfile(score2_file) - and not rerank2_is_gen - ): - print( - "STEP 2: process the output of generate.py so we have clean text files with the translations" - ) - - rescore_file = "/rescore_data" - if args.prefix_len is not None: - prefix_len_rescore_file = rescore_file + "prefix" + str(args.prefix_len) - if args.target_prefix_frac is not None: - target_prefix_frac_rescore_file = ( - rescore_file + "target_prefix_frac" + str(args.target_prefix_frac) - ) - if args.source_prefix_frac is not None: - source_prefix_frac_rescore_file = ( - rescore_file + "source_prefix_frac" + str(args.source_prefix_frac) - ) - - if not args.right_to_left1 or not args.right_to_left2: - if not args.diff_bpe: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + rescore_file + "." + args.source_lang, - pre_gen + rescore_file + "." + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - ) - if args.prefix_len is not None: - bw_rescore_file = prefix_len_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + prefix_len_rescore_file + "." + args.source_lang, - pre_gen + prefix_len_rescore_file + "." + args.target_lang, - pre_gen + "/reference_file", - prefix_len=args.prefix_len, - bpe_symbol=args.post_process, - ) - elif args.target_prefix_frac is not None: - bw_rescore_file = target_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + target_prefix_frac_rescore_file - + "." 
- + args.source_lang, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - target_prefix_frac=args.target_prefix_frac, - ) - else: - bw_rescore_file = rescore_file - - if args.source_prefix_frac is not None: - fw_rescore_file = source_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - source_prefix_frac=args.source_prefix_frac, - ) - else: - fw_rescore_file = rescore_file - - if args.right_to_left1 or args.right_to_left2: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + "/right_to_left_rescore_data." + args.source_lang, - pre_gen + "/right_to_left_rescore_data." + args.target_lang, - pre_gen + "/right_to_left_reference_file", - right_to_left=True, - bpe_symbol=args.post_process, - ) - - print("STEP 3: binarize the translations") - if ( - not args.right_to_left1 - or args.score_model2 is not None - and not args.right_to_left2 - or not rerank1_is_gen - ): - - if args.backwards1 or args.backwards2: - if args.backwards_score_dict_dir is not None: - bw_dict = args.backwards_score_dict_dir - else: - bw_dict = args.score_dict_dir - bw_preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + bw_rescore_file, - "--srcdict", - bw_dict + "/dict." + scorer1_src + ".txt", - "--tgtdict", - bw_dict + "/dict." + scorer1_tgt + ".txt", - "--destdir", - backwards_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(bw_preprocess_param) - preprocess.main(input_args) - - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + fw_rescore_file, - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." + scorer1_tgt + ".txt", - "--destdir", - left_to_right_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - if args.right_to_left1 or args.right_to_left2: - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + "/right_to_left_rescore_data", - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." 
+ scorer1_tgt + ".txt", - "--destdir", - right_to_left_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - return gen_output - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - gen_and_reprocess_nbest(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/shuffled_word_order/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/shuffled_word_order/README.md deleted file mode 100644 index f20483849a8ca33bf349b57882a79155ba593bf1..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/shuffled_word_order/README.md +++ /dev/null @@ -1,84 +0,0 @@ -# Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little - -[https://arxiv.org/abs/2104.06644](https://arxiv.org/abs/2104.06644) - -## Introduction - -In this work, we pre-train [RoBERTa](../roberta) base on various word shuffled variants of BookWiki corpus (16GB). We observe that a word shuffled pre-trained model achieves surprisingly good scores on GLUE, PAWS and several parametric probing tasks. Please read our paper for more details on the experiments. - -## Pre-trained models - -| Model | Description | Download | -| ------------------------------------- | -------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | -| `roberta.base.orig` | RoBERTa (base) trained on natural corpus | [roberta.base.orig.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.orig.tar.gz) | -| `roberta.base.shuffle.n1` | RoBERTa (base) trained on n=1 gram sentence word shuffled data | [roberta.base.shuffle.n1.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.tar.gz) | -| `roberta.base.shuffle.n2` | RoBERTa (base) trained on n=2 gram sentence word shuffled data | [roberta.base.shuffle.n2.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n2.tar.gz) | -| `roberta.base.shuffle.n3` | RoBERTa (base) trained on n=3 gram sentence word shuffled data | [roberta.base.shuffle.n3.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n3.tar.gz) | -| `roberta.base.shuffle.n4` | RoBERTa (base) trained on n=4 gram sentence word shuffled data | [roberta.base.shuffle.n4.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n4.tar.gz) | -| `roberta.base.shuffle.512` | RoBERTa (base) trained on unigram 512 word block shuffled data | [roberta.base.shuffle.512.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.512.tar.gz) | -| `roberta.base.shuffle.corpus` | RoBERTa (base) trained on unigram corpus word shuffled data | [roberta.base.shuffle.corpus.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus.tar.gz) | -| `roberta.base.shuffle.corpus_uniform` | RoBERTa (base) trained on unigram corpus word shuffled data, where all words are uniformly sampled | [roberta.base.shuffle.corpus_uniform.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus_uniform.tar.gz) | -| `roberta.base.nopos` | RoBERTa (base) without positional embeddings, trained on natural corpus | 
[roberta.base.nopos.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.nopos.tar.gz) | - -## Results - -[GLUE (Wang et al, 2019)](https://gluebenchmark.com/) & [PAWS (Zhang et al, 2019)](https://github.com/google-research-datasets/paws) _(dev set, single model, single-task fine-tuning, median of 5 seeds)_ - -| name | CoLA | MNLI | MRPC | PAWS | QNLI | QQP | RTE | SST-2 | -| :----------------------------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | -| `roberta.base.orig` | 61.4 | 86.11 | 89.19 | 94.46 | 92.53 | 91.26 | 74.64 | 93.92 | -| `roberta.base.shuffle.n1` | 35.15 | 82.64 | 86 | 89.97 | 89.02 | 91.01 | 69.02 | 90.47 | -| `roberta.base.shuffle.n2` | 54.37 | 83.43 | 86.24 | 93.46 | 90.44 | 91.36 | 70.83 | 91.79 | -| `roberta.base.shuffle.n3` | 48.72 | 83.85 | 86.36 | 94.05 | 91.69 | 91.24 | 70.65 | 92.02 | -| `roberta.base.shuffle.n4` | 58.64 | 83.77 | 86.98 | 94.32 | 91.69 | 91.4 | 70.83 | 92.48 | -| `roberta.base.shuffle.512` | 12.76 | 77.52 | 79.61 | 84.77 | 85.19 | 90.2 | 56.52 | 86.34 | -| `roberta.base.shuffle.corpus` | 0 | 71.9 | 70.52 | 58.52 | 71.11 | 85.52 | 53.99 | 83.35 | -| `roberta.base.shuffle.corpus_random` | 9.19 | 72.33 | 70.76 | 58.42 | 77.76 | 85.93 | 53.99 | 84.04 | -| `roberta.base.nopos` | 0 | 63.5 | 72.73 | 57.08 | 77.72 | 87.87 | 54.35 | 83.24 | - -For more results on probing tasks, please refer to [our paper](https://arxiv.org/abs/2104.06644). - -## Example Usage - -Follow the same usage as in [RoBERTa](https://github.com/pytorch/fairseq/tree/main/examples/roberta) to load and test your models: - -```python -# Download roberta.base.shuffle.n1 model -wget https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.tar.gz -tar -xzvf roberta.base.shuffle.n1.tar.gz - -# Load the model in fairseq -from fairseq.models.roberta import RoBERTaModel -roberta = RoBERTaModel.from_pretrained('/path/to/roberta.base.shuffle.n1', checkpoint_file='model.pt') -roberta.eval() # disable dropout (or leave in train mode to finetune) -``` - -**Note**: The model trained without positional embeddings (`roberta.base.nopos`) is a modified `RoBERTa` model, where the positional embeddings are not used. Thus, the typical `from_pretrained` method on fairseq version of RoBERTa will not be able to load the above model weights. To do so, construct a new `RoBERTaModel` object by setting the flag `use_positional_embeddings` to `False` (or [in the latest code](https://github.com/pytorch/fairseq/blob/main/fairseq/models/roberta/model.py#L543), set `no_token_positional_embeddings` to `True`), and then load the individual weights. - -## Fine-tuning Evaluation - -We provide the trained fine-tuned models on MNLI here for each model above for quick evaluation (1 seed for each model). Please refer to [finetuning details](README.finetuning.md) for the parameters of these models. Follow [RoBERTa](https://github.com/pytorch/fairseq/tree/main/examples/roberta) instructions to evaluate these models. 
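As a rough, illustrative sketch only (not from the original README), the snippet below shows how one of the MNLI checkpoints listed in the table below could be loaded and queried with fairseq's RoBERTa API. The class name (`RobertaModel`), the `sentence_classification_head` head name, and all paths follow the standard fairseq RoBERTa GLUE example and are assumptions here; adjust them to your local setup.

```python
# Illustrative sketch only: class name, head name, and paths are assumptions
# based on fairseq's standard RoBERTa GLUE fine-tuning example.
from fairseq.models.roberta import RobertaModel

roberta = RobertaModel.from_pretrained(
    '/path/to/roberta.base.shuffle.n1.mnli',  # extracted checkpoint directory (assumed layout)
    checkpoint_file='model.pt',
    data_name_or_path='MNLI-bin',             # binarized MNLI data, as in the RoBERTa GLUE example
)
roberta.eval()  # disable dropout for evaluation

# Score a single premise/hypothesis pair with the classification head.
tokens = roberta.encode('The cat sat on the mat.', 'A cat is sitting on a mat.')
prediction = roberta.predict('sentence_classification_head', tokens).argmax().item()
print(prediction)  # index into the task's label dictionary (entailment / neutral / contradiction)
```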
- -| Model | MNLI M Dev Accuracy | Link | -| :----------------------------------------- | :------------------ | :--------------------------------------------------------------------------------------------------------------- | -| `roberta.base.orig.mnli` | 86.14 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.orig.mnli.tar.gz) | -| `roberta.base.shuffle.n1.mnli` | 82.55 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.mnli.tar.gz) | -| `roberta.base.shuffle.n2.mnli` | 83.21 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n2.mnli.tar.gz) | -| `roberta.base.shuffle.n3.mnli` | 83.89 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n3.mnli.tar.gz) | -| `roberta.base.shuffle.n4.mnli` | 84.00 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n4.mnli.tar.gz) | -| `roberta.base.shuffle.512.mnli` | 77.22 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.512.mnli.tar.gz) | -| `roberta.base.shuffle.corpus.mnli` | 71.88 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus.mnli.tar.gz) | -| `roberta.base.shuffle.corpus_uniform.mnli` | 72.46 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus_uniform.mnli.tar.gz) | - -## Citation - -```bibtex -@misc{sinha2021masked, - title={Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little}, - author={Koustuv Sinha and Robin Jia and Dieuwke Hupkes and Joelle Pineau and Adina Williams and Douwe Kiela}, - year={2021}, - eprint={2104.06644}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` diff --git a/spaces/mshukor/UnIVAL/models/taming/modules/losses/segmentation.py b/spaces/mshukor/UnIVAL/models/taming/modules/losses/segmentation.py deleted file mode 100644 index 4ba77deb5159a6307ed2acba9945e4764a4ff0a5..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/models/taming/modules/losses/segmentation.py +++ /dev/null @@ -1,22 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - - -class BCELoss(nn.Module): - def forward(self, prediction, target): - loss = F.binary_cross_entropy_with_logits(prediction,target) - return loss, {} - - -class BCELossWithQuant(nn.Module): - def __init__(self, codebook_weight=1.): - super().__init__() - self.codebook_weight = codebook_weight - - def forward(self, qloss, target, prediction, split): - bce_loss = F.binary_cross_entropy_with_logits(prediction,target) - loss = bce_loss + self.codebook_weight*qloss - return loss, {"{}/total_loss".format(split): loss.clone().detach().mean(), - "{}/bce_loss".format(split): bce_loss.detach().mean(), - "{}/quant_loss".format(split): qloss.detach().mean() - } diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/data/datasets/__init__.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/data/datasets/__init__.py deleted file mode 100644 index cca9ef494a2d44f0e027cf63edf8c5c6f0357394..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/data/datasets/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .dataset_simple_2d import * -from .dataset_simple_3d import * \ No newline at end of file diff --git a/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app.py b/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app.py deleted file mode 100644 index 
34b6115943e0e0229825bcf632071bfb1fbf5a37..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/Tune-A-Video-Training-UI-poli/app.py +++ /dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env python -from __future__ import annotations - -import os -from subprocess import getoutput - -import gradio as gr -import torch - -from app_inference import create_inference_demo -from app_training import create_training_demo -from app_upload import create_upload_demo -from inference import InferencePipeline -from trainer import Trainer - -TITLE = '# [Tune-A-Video](https://tuneavideo.github.io/) UI' - -ORIGINAL_SPACE_ID = 'multimodalart/Tune-A-Video-Training-UI-poli' -SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID) -GPU_DATA = getoutput('nvidia-smi') -SHARED_UI_WARNING = f'''## Attention - Training doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU. - -
            Duplicate Space
            -''' - -if os.getenv('SYSTEM') == 'spaces' and SPACE_ID != ORIGINAL_SPACE_ID: - SETTINGS = f'Settings' -else: - SETTINGS = 'Settings' - -INVALID_GPU_WARNING = f'''## Attention - the specified GPU is invalid. Training may not work. Make sure you have selected a `T4 GPU` for this task.''' - -CUDA_NOT_AVAILABLE_WARNING = f'''## Attention - Running on CPU. -
            -You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces. -You can use "T4 small/medium" to run this demo. -
            -''' - -HF_TOKEN_NOT_SPECIFIED_WARNING = f'''The environment variable `HF_TOKEN` is not specified. Feel free to specify your Hugging Face token with write permission if you don't want to manually provide it for every run. -
            -You can check and create your Hugging Face tokens here. -You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab. -
            -''' - - - -HF_TOKEN = os.getenv('HF_TOKEN') - - -def show_warning(warning_text: str) -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Box(): - gr.Markdown(warning_text) - return demo - - -pipe = InferencePipeline(HF_TOKEN) -trainer = Trainer(HF_TOKEN) - -with gr.Blocks(css='style.css') as demo: - if SPACE_ID == ORIGINAL_SPACE_ID: - show_warning(SHARED_UI_WARNING) - elif not torch.cuda.is_available(): - show_warning(CUDA_NOT_AVAILABLE_WARNING) - elif(not "T4" in GPU_DATA): - show_warning(INVALID_GPU_WARNING) - - - gr.Markdown(TITLE) - with gr.Tabs(): - with gr.TabItem('Train'): - create_training_demo(trainer, pipe) - with gr.TabItem('Run'): - create_inference_demo(pipe, HF_TOKEN) - with gr.TabItem('Upload'): - gr.Markdown(''' - - You can use this tab to upload models later if you choose not to upload models in training time or if upload in training time failed. - ''') - create_upload_demo(HF_TOKEN) - - if not HF_TOKEN: - show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING) - -demo.queue(max_size=1).launch(share=False) \ No newline at end of file diff --git a/spaces/mygyasir/invisiblecat-junior-diffusion/app.py b/spaces/mygyasir/invisiblecat-junior-diffusion/app.py deleted file mode 100644 index afae3fbacc17906cb51cf1e2a2e39244c0093b78..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/invisiblecat-junior-diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/invisiblecat/junior-diffusion").launch() \ No newline at end of file diff --git a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/predictor.py b/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/predictor.py deleted file mode 100644 index 8a6e6d816955b4c6097e1de6ce6e4ed3bafe327c..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/predictor.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from segment_anything.modeling import Sam - -from typing import Optional, Tuple - -from .utils.transforms import ResizeLongestSide - - -class SamPredictor: - def __init__( - self, - sam_model: Sam, - ) -> None: - """ - Uses SAM to calculate the image embedding for an image, and then - allow repeated, efficient mask prediction given prompts. - - Arguments: - sam_model (Sam): The model to use for mask prediction. - """ - super().__init__() - self.model = sam_model - self.transform = ResizeLongestSide(sam_model.image_encoder.img_size) - self.reset_image() - - def set_image( - self, - image: np.ndarray, - image_format: str = "RGB", - ) -> None: - """ - Calculates the image embeddings for the provided image, allowing - masks to be predicted with the 'predict' method. - - Arguments: - image (np.ndarray): The image for calculating masks. Expects an - image in HWC uint8 format, with pixel values in [0, 255]. - image_format (str): The color format of the image, in ['RGB', 'BGR']. - """ - assert image_format in [ - "RGB", - "BGR", - ], f"image_format must be in ['RGB', 'BGR'], is {image_format}." 
- if image_format != self.model.image_format: - image = image[..., ::-1] - - # Transform the image to the form expected by the model - input_image = self.transform.apply_image(image) - input_image_torch = torch.as_tensor(input_image, device=self.device) - input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :] - - self.set_torch_image(input_image_torch, image.shape[:2]) - - @torch.no_grad() - def set_torch_image( - self, - transformed_image: torch.Tensor, - original_image_size: Tuple[int, ...], - ) -> None: - """ - Calculates the image embeddings for the provided image, allowing - masks to be predicted with the 'predict' method. Expects the input - image to be already transformed to the format expected by the model. - - Arguments: - transformed_image (torch.Tensor): The input image, with shape - 1x3xHxW, which has been transformed with ResizeLongestSide. - original_image_size (tuple(int, int)): The size of the image - before transformation, in (H, W) format. - """ - assert ( - len(transformed_image.shape) == 4 - and transformed_image.shape[1] == 3 - and max(*transformed_image.shape[2:]) == self.model.image_encoder.img_size - ), f"set_torch_image input must be BCHW with long side {self.model.image_encoder.img_size}." - self.reset_image() - - self.original_size = original_image_size - self.input_size = tuple(transformed_image.shape[-2:]) - input_image = self.model.preprocess(transformed_image) - self.features = self.model.image_encoder(input_image) - self.is_image_set = True - - def predict( - self, - point_coords: Optional[np.ndarray] = None, - point_labels: Optional[np.ndarray] = None, - box: Optional[np.ndarray] = None, - mask_input: Optional[np.ndarray] = None, - multimask_output: bool = True, - return_logits: bool = False, - ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: - """ - Predict masks for the given input prompts, using the currently set image. - - Arguments: - point_coords (np.ndarray or None): A Nx2 array of point prompts to the - model. Each point is in (X,Y) in pixels. - point_labels (np.ndarray or None): A length N array of labels for the - point prompts. 1 indicates a foreground point and 0 indicates a - background point. - box (np.ndarray or None): A length 4 array given a box prompt to the - model, in XYXY format. - mask_input (np.ndarray): A low resolution mask input to the model, typically - coming from a previous prediction iteration. Has form 1xHxW, where - for SAM, H=W=256. - multimask_output (bool): If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - return_logits (bool): If true, returns un-thresholded masks logits - instead of a binary mask. - - Returns: - (np.ndarray): The output masks in CxHxW format, where C is the - number of masks, and (H, W) is the original image size. - (np.ndarray): An array of length C containing the model's - predictions for the quality of each mask. - (np.ndarray): An array of shape CxHxW, where C is the number - of masks and H=W=256. These low resolution logits can be passed to - a subsequent iteration as mask input. - """ - if not self.is_image_set: - raise RuntimeError("An image must be set with .set_image(...) 
before mask prediction.") - - # Transform input prompts - coords_torch, labels_torch, box_torch, mask_input_torch = None, None, None, None - if point_coords is not None: - assert ( - point_labels is not None - ), "point_labels must be supplied if point_coords is supplied." - point_coords = self.transform.apply_coords(point_coords, self.original_size) - coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=self.device) - labels_torch = torch.as_tensor(point_labels, dtype=torch.int, device=self.device) - coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :] - if box is not None: - box = self.transform.apply_boxes(box, self.original_size) - box_torch = torch.as_tensor(box, dtype=torch.float, device=self.device) - box_torch = box_torch[None, :] - if mask_input is not None: - mask_input_torch = torch.as_tensor(mask_input, dtype=torch.float, device=self.device) - mask_input_torch = mask_input_torch[None, :, :, :] - - masks, iou_predictions, low_res_masks = self.predict_torch( - coords_torch, - labels_torch, - box_torch, - mask_input_torch, - multimask_output, - return_logits=return_logits, - ) - - masks_np = masks[0].detach().cpu().numpy() - iou_predictions_np = iou_predictions[0].detach().cpu().numpy() - low_res_masks_np = low_res_masks[0].detach().cpu().numpy() - return masks_np, iou_predictions_np, low_res_masks_np - - @torch.no_grad() - def predict_torch( - self, - point_coords: Optional[torch.Tensor], - point_labels: Optional[torch.Tensor], - boxes: Optional[torch.Tensor] = None, - mask_input: Optional[torch.Tensor] = None, - multimask_output: bool = True, - return_logits: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Predict masks for the given input prompts, using the currently set image. - Input prompts are batched torch tensors and are expected to already be - transformed to the input frame using ResizeLongestSide. - - Arguments: - point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the - model. Each point is in (X,Y) in pixels. - point_labels (torch.Tensor or None): A BxN array of labels for the - point prompts. 1 indicates a foreground point and 0 indicates a - background point. - boxes (np.ndarray or None): A Bx4 array given a box prompt to the - model, in XYXY format. - mask_input (np.ndarray): A low resolution mask input to the model, typically - coming from a previous prediction iteration. Has form Bx1xHxW, where - for SAM, H=W=256. Masks returned by a previous iteration of the - predict method do not need further transformation. - multimask_output (bool): If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - return_logits (bool): If true, returns un-thresholded masks logits - instead of a binary mask. - - Returns: - (torch.Tensor): The output masks in BxCxHxW format, where C is the - number of masks, and (H, W) is the original image size. - (torch.Tensor): An array of shape BxC containing the model's - predictions for the quality of each mask. - (torch.Tensor): An array of shape BxCxHxW, where C is the number - of masks and H=W=256. These low res logits can be passed to - a subsequent iteration as mask input. 
- """ - if not self.is_image_set: - raise RuntimeError("An image must be set with .set_image(...) before mask prediction.") - - if point_coords is not None: - points = (point_coords, point_labels) - else: - points = None - - # Embed prompts - sparse_embeddings, dense_embeddings = self.model.prompt_encoder( - points=points, - boxes=boxes, - masks=mask_input, - ) - - # Predict masks - low_res_masks, iou_predictions = self.model.mask_decoder( - image_embeddings=self.features, - image_pe=self.model.prompt_encoder.get_dense_pe(), - sparse_prompt_embeddings=sparse_embeddings, - dense_prompt_embeddings=dense_embeddings, - multimask_output=multimask_output, - ) - - # Upscale the masks to the original image resolution - masks = self.model.postprocess_masks(low_res_masks, self.input_size, self.original_size) - - if not return_logits: - masks = masks > self.model.mask_threshold - - return masks, iou_predictions, low_res_masks - - def get_image_embedding(self) -> torch.Tensor: - """ - Returns the image embeddings for the currently set image, with - shape 1xCxHxW, where C is the embedding dimension and (H,W) are - the embedding spatial dimension of SAM (typically C=256, H=W=64). - """ - if not self.is_image_set: - raise RuntimeError( - "An image must be set with .set_image(...) to generate an embedding." - ) - assert self.features is not None, "Features must exist if an image has been set." - return self.features - - @property - def device(self) -> torch.device: - return self.model.device - - def reset_image(self) -> None: - """Resets the currently set image.""" - self.is_image_set = False - self.features = None - self.orig_h = None - self.orig_w = None - self.input_h = None - self.input_w = None diff --git a/spaces/nakas/MusicGenDemucs/audiocraft/modules/__init__.py b/spaces/nakas/MusicGenDemucs/audiocraft/modules/__init__.py deleted file mode 100644 index 81ba30f6466ff91b90490a4fb92f7d3d0d00144d..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/audiocraft/modules/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .conv import ( - NormConv1d, - NormConv2d, - NormConvTranspose1d, - NormConvTranspose2d, - StreamableConv1d, - StreamableConvTranspose1d, - pad_for_conv1d, - pad1d, - unpad1d, -) -from .lstm import StreamableLSTM -from .seanet import SEANetEncoder, SEANetDecoder diff --git a/spaces/nateraw/lavila/main_infer_narrator.py b/spaces/nateraw/lavila/main_infer_narrator.py deleted file mode 100644 index 5a5dcb6796efbcf4ab80724a42f49494a3150959..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/main_infer_narrator.py +++ /dev/null @@ -1,257 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- - -import argparse -from collections import OrderedDict -import os -import os.path as osp -import pickle -import time - -import torch -import torchvision.transforms as transforms -import torchvision.transforms._transforms_video as transforms_video - -from lavila.data import datasets -from lavila.data.video_transforms import Permute -from lavila.models import models -from lavila.utils.preprocess import generate_tokenizer -from lavila.utils import distributed as dist_utils -from eval_narrator import decode_one - - -class IndexedDataset(torch.utils.data.Dataset): - def __init__(self, dataset): - self.dataset = dataset - - def __getitem__(self, index): - return index, self.dataset[index] - - def __len__(self): - return len(self.dataset) - - -def get_args_parser(): - parser = argparse.ArgumentParser(description='lavila infer narrator', add_help=False) - parser.add_argument('--dataset', default='ego4d', type=str, choices=['ego4d']) - parser.add_argument('--root', - default='datasets/Ego4D/video_5min_chunks_288px/', - type=str, help='path to dataset root') - parser.add_argument('--metadata', - default='datasets/Ego4D/ego4d_train.pkl', - type=str, help='path to metadata file') - parser.add_argument('--output-dir', default='./', type=str, help='output dir') - parser.add_argument('--batch-size', default=64, type=int) - parser.add_argument('--use-half', action='store_true') - parser.add_argument('--clip-length', default=4, type=int, help='clip length') - parser.add_argument('--clip-stride', default=16, type=int, help='clip stride') - parser.add_argument('--resume', default='', type=str, help='path to latest checkpoint') - parser.add_argument('--caption-sample', default='multinomial_sample', - choices=['multinomial_sample', 'beam_sample', 'group_beam_search']) - parser.add_argument('--caption-top-k', default=None, type=int) - parser.add_argument('--caption-top-p', default=0.95, type=float) - parser.add_argument('--caption-num-beams', default=1, type=int) - parser.add_argument('--caption-num-beam-groups', default=1, type=int) - parser.add_argument('--caption-temperature', default=0.7, type=float) - parser.add_argument('--caption-length-penalty', default=1.0, type=float) - parser.add_argument('--caption-num-return-sequences', default=10, type=int) - parser.add_argument('--caption-max-len', default=77, type=int) - parser.add_argument('--caption-early-stop', action='store_true', help='early stopping to save computation') - # System - parser.add_argument('--print-freq', default=10, type=int, help='print frequency') - parser.add_argument('-j', '--workers', default=10, type=int, metavar='N', - help='number of data loading workers per process') - parser.add_argument('--world-size', default=1, type=int, - help='number of nodes for distributed training') - parser.add_argument('--rank', default=0, type=int, - help='node rank for distributed training') - parser.add_argument("--local_rank", type=int, default=0) - parser.add_argument('--dist-url', default='env://', type=str, - help='url used to set up distributed training') - parser.add_argument('--dist-backend', default='nccl', type=str) - parser.add_argument('--gpu', default=None, type=int, help='GPU id to use.') - return parser - - -def main(args): - dist_utils.init_distributed_mode(args) - print(args) - - if args.resume: - ckpt_path = args.resume - elif osp.isfile(osp.join(args.output_dir, 'checkpoint_best.pt')): - ckpt_path = osp.join(args.output_dir, 'checkpoint_best.pt') - else: - raise Exception('no checkpoint found') - - ckpt = torch.load(ckpt_path, 
map_location='cpu') - state_dict = OrderedDict() - for k, v in ckpt['state_dict'].items(): - state_dict[k.replace('module.', '')] = v - - # create model - old_args = ckpt['args'] - print('=> creating model: {}'.format(old_args.model)) - model = getattr(models, old_args.model)( - text_use_cls_token=old_args.use_cls_token, - gated_xattn=old_args.gated_xattn, - timesformer_gated_xattn=old_args.timesformer_gated_xattn, - num_frames=old_args.clip_length, - drop_path_rate=0, - ) - model.cuda() - model.load_state_dict(state_dict, strict=True) - print("=> loaded resume checkpoint '{}' (epoch {})".format(args.resume, ckpt['epoch'])) - - torch.backends.cudnn.benchmark = True - - # Data loading - print("=> creating dataset") - tokenizer = generate_tokenizer(old_args.model) - - crop_size = 224 if '336PX' not in old_args.model else 336 - val_transform = transforms.Compose([ - Permute([3, 0, 1, 2]), # T H W C -> C T H W - transforms.Resize(crop_size), - transforms.CenterCrop(crop_size), - (transforms_video.NormalizeVideo(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375]) if 'OPENAI' not in old_args.model else - transforms_video.NormalizeVideo(mean=[108.3272985, 116.7460125, 104.09373615000001], std=[68.5005327, 66.6321579, 70.32316305])), - ]) - - val_dataset = datasets.VideoCaptionDatasetCLIP( - args.dataset, - args.root, - args.metadata, - transform=val_transform, - is_training=False, - tokenizer=tokenizer, - clip_length=args.clip_length, - clip_stride=args.clip_stride, - sparse_sample=False, - subsample_stride=1, - ) - val_dataset = IndexedDataset(val_dataset) - - print(len(val_dataset)) - - if args.distributed: - val_sampler = torch.utils.data.distributed.DistributedSampler(val_dataset, shuffle=False) - else: - val_sampler = None - - val_loader = torch.utils.data.DataLoader( - val_dataset, - batch_size=args.batch_size, - shuffle=False, - num_workers=args.workers, pin_memory=True, sampler=val_sampler, drop_last=False - ) - print('len(val_loader) = {}'.format(len(val_loader))) - - model.eval() - if args.use_half: - model.half() - - id_offset = 0 - all_captions_cache = [] - end = time.time() - with torch.no_grad(): - for data_iter, (indices, inputs) in enumerate(val_loader): - indices = indices.tolist() - if data_iter % args.print_freq == 0: - print("finished {}/{} in {}".format(data_iter, len(val_loader), time.time() - end)) - end = time.time() - if len(inputs) == 2 or len(inputs) == 3: - images = inputs[0].cuda(non_blocking=True) - if args.use_half: - images = images.half() - - image_features = dist_utils.get_model(model).encode_image(images) - if not isinstance(image_features, (list, tuple)): - image_tokens = image_features - else: - image_tokens = image_features[1] - if args.caption_sample == 'multinomial_sample': - generated_text_ids, ppls = dist_utils.get_model(model).generate( - image_tokens, - tokenizer, - target=None, - max_text_length=args.caption_max_len, - top_k=args.caption_top_k, - top_p=args.caption_top_p, - num_return_sequences=args.caption_num_return_sequences, - temperature=args.caption_temperature, - early_stopping=args.caption_early_stop, - ) - elif args.caption_sample == 'beam_sample': - generated_text_ids, ppls = dist_utils.get_model(model).beam_sample( - image_tokens, - tokenizer, - target=None, - max_text_length=args.caption_max_len, - top_k=args.caption_top_k, - top_p=args.caption_top_p, - temperature=args.caption_temperature, - length_penalty=args.caption_length_penalty, - num_beams=args.caption_num_beams, - num_return_sequences=args.caption_num_return_sequences, - ) 
- elif args.caption_sample == 'group_beam_search': - assert args.caption_num_beam_groups > 1 and args.caption_num_beams % args.caption_num_beam_groups == 0 - generated_text_ids, ppls = dist_utils.get_model(model).group_beam_search( - image_tokens, - tokenizer, - target=None, - max_text_length=args.caption_max_len, - top_k=args.caption_top_k, - top_p=args.caption_top_p, - temperature=args.caption_temperature, - length_penalty=args.caption_length_penalty, - num_beams=args.caption_num_beams, - num_beam_groups=args.caption_num_beam_groups, - num_return_sequences=args.caption_num_return_sequences, - ) - for j in range(generated_text_ids.shape[0] // args.caption_num_return_sequences): - generated_text_str_list = [] - ppls_list = [] - for k in range(args.caption_num_return_sequences): - jj = j * args.caption_num_return_sequences + k - generated_text_str = decode_one(generated_text_ids[jj], tokenizer) - generated_text_str_list.append(generated_text_str) - ppls_list.append(ppls[jj].item()) - video_uid, t_start, t_end, _ = val_loader.dataset.dataset.samples[indices[j]] - if args.caption_num_return_sequences == 1: - all_captions_cache.append((video_uid, t_start, t_end, generated_text_str, ppls[jj].item())) - else: - all_captions_cache.append((video_uid, t_start, t_end, generated_text_str_list, ppls_list)) - id_offset += generated_text_ids.shape[0] - - pickle.dump(all_captions_cache, open(osp.join(args.output_dir, 'cache.{}.pkl'.format(args.rank)), 'wb')) - - torch.distributed.barrier() - disorded_list = [] - total_num = 0 - if args.rank == 0: - for i in range(args.world_size): - print('=> reading {}'.format(osp.join(args.output_dir, f'cache.{i}.pkl'))) - sublist = pickle.load(open(osp.join(args.output_dir, f'cache.{i}.pkl'), 'rb')) - disorded_list.append(sublist) - total_num += len(sublist) - ordered_list = [] - for i in range(total_num): - ordered_list.append(disorded_list[i % args.world_size][i // args.world_size]) - print(f"{len(val_dataset)}/{len(ordered_list)}") - ordered_list = ordered_list[:len(val_dataset)] - pickle.dump(ordered_list, open(osp.join(args.output_dir, 'total.pkl'), 'wb')) - for i in range(args.world_size): - print('=> deleting {}'.format(osp.join(args.output_dir, f'cache.{i}.pkl'))) - os.remove(osp.join(args.output_dir, f'cache.{i}.pkl')) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser('lavila infer narrator', parents=[get_args_parser()]) - args = parser.parse_args() - main(args) diff --git a/spaces/nateraw/lavila/run_with_submitit_finetune_classification.py b/spaces/nateraw/lavila/run_with_submitit_finetune_classification.py deleted file mode 100644 index 25d10df9b2720bf0961e2fe1c6e95fca7983aebb..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/run_with_submitit_finetune_classification.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -A script to run multinode training with submitit. 
-""" -import argparse -import os -import uuid -from pathlib import Path - -import main_finetune_classification as main_finetune -import submitit - - -def parse_args(): - parser = main_finetune.get_args_parser() - parser = argparse.ArgumentParser("Submitit for lavila fine-tuning", parents=[parser]) - parser.add_argument("--ngpus", default=8, type=int, help="Number of gpus to request on each node") - parser.add_argument("--nodes", default=8, type=int, help="Number of nodes to request") - parser.add_argument("--timeout", default=2880, type=int, help="Duration of the job") - parser.add_argument("--job_dir", default="", type=str, help="Job dir. Leave empty for automatic.") - - parser.add_argument("--partition", default="learnlab", type=str, help="Partition where to submit") - parser.add_argument("--use_volta32", action='store_true', help="Big models? Use this") - parser.add_argument('--comment', default="", type=str, - help='Comment to pass to scheduler, e.g. priority message') - return parser.parse_args() - - -def get_shared_folder() -> Path: - user = os.getenv("USER") - if Path("/checkpoint/").is_dir(): - p = Path(f"/checkpoint/{user}/experiments/lavila_ft") - p.mkdir(exist_ok=True) - return p - raise RuntimeError("No shared folder available") - - -def get_init_file(): - # Init file must not exist, but it's parent dir must exist. - os.makedirs(str(get_shared_folder()), exist_ok=True) - init_file = get_shared_folder() / f"{uuid.uuid4().hex}_init" - if init_file.exists(): - os.remove(str(init_file)) - return init_file - - -class Trainer(object): - def __init__(self, args): - self.args = args - - def __call__(self): - import main_finetune_classification as main_finetune - - self._setup_gpu_args() - main_finetune.main(self.args) - - def checkpoint(self): - import submitit - - self.args.dist_url = get_init_file().as_uri() - print("Requeuing ", self.args) - empty_trainer = type(self)(self.args) - return submitit.helpers.DelayedSubmission(empty_trainer) - - def _setup_gpu_args(self): - import submitit - from pathlib import Path - - job_env = submitit.JobEnvironment() - self.args.output_dir = Path(str(self.args.output_dir).replace("%j", str(job_env.job_id))) - self.args.gpu = job_env.local_rank - self.args.rank = job_env.global_rank - self.args.world_size = job_env.num_tasks - print(f"Process group: {job_env.num_tasks} tasks, rank: {job_env.global_rank}") - - -def main(): - args = parse_args() - if args.job_dir == "": - args.job_dir = get_shared_folder() / "%j" - - # Note that the folder will depend on the job_id, to easily track experiments - executor = submitit.AutoExecutor(folder=args.job_dir, slurm_max_num_timeout=30) - - num_gpus_per_node = args.ngpus - nodes = args.nodes - timeout_min = args.timeout - - partition = args.partition - kwargs = {} - if args.use_volta32: - kwargs['slurm_constraint'] = 'volta32gb' - if args.comment: - kwargs['slurm_comment'] = args.comment - - executor.update_parameters( - mem_gb=40 * num_gpus_per_node, - gpus_per_node=num_gpus_per_node, - tasks_per_node=num_gpus_per_node, # one task per GPU - cpus_per_task=10, - nodes=nodes, - timeout_min=timeout_min, # max is 60 * 72 - # Below are cluster dependent parameters - slurm_partition=partition, - slurm_signal_delay_s=120, - **kwargs - ) - - executor.update_parameters(name="lavila_ft") - - args.dist_url = get_init_file().as_uri() - args.output_dir = args.job_dir - - trainer = Trainer(args) - job = executor.submit(trainer) - - print("Submitted job_id:", job.job_id) - - -if __name__ == "__main__": - main() diff --git 
a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Scaricare AutoCAD Map 3D 2016 Codice Di Attivazione 64 Bits IT __HOT__.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Scaricare AutoCAD Map 3D 2016 Codice Di Attivazione 64 Bits IT __HOT__.md deleted file mode 100644 index bc04354c8661fa725e366a365994527e23d92d44..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Scaricare AutoCAD Map 3D 2016 Codice Di Attivazione 64 Bits IT __HOT__.md +++ /dev/null @@ -1,37 +0,0 @@ - -

# How to Download and Activate AutoCAD Map 3D 2016 for Windows 64 Bit

AutoCAD Map 3D 2016 is a geospatial design and analysis application that lets you create and manage maps, data, and 3D models. With AutoCAD Map 3D 2016 you can integrate data from different sources, visualize and analyze geographic information, automate workflows, and share the results with other users.

**Scaricare AutoCAD Map 3D 2016 Codice Di Attivazione 64 Bits IT**

**Download File ☆☆☆ [https://urlcod.com/2uIbz6](https://urlcod.com/2uIbz6)**

To download and activate AutoCAD Map 3D 2016 for Windows 64 bit, follow these steps:

1. Go to the official Autodesk website and create an account, or log in if you already have one.
2. Open the AutoCAD Map 3D 2016 download page and select the version for Windows 64 bit.
3. Click "Download now" and choose your preferred download method (browser, download manager, or network installation).
4. Follow the instructions to complete the download of the installation file.
5. Run the installation file and follow the setup wizard to install the software on your computer.
6. When the installation finishes, start the software and click "Activate" on the start screen.
7. Enter the product key (129H1) and the activation code that you receive by email after registering the software on the Autodesk website.
8. Click "Next" and follow the instructions to complete the activation of the software.

At this point you can start using AutoCAD Map 3D 2016 for your geospatial projects. For more information about the software's features and what's new, see the official Autodesk website or the online help.

## How to Optimize the SEO and the HTML Formatting of the Title and Article

To make the title and the article more effective in terms of SEO (Search Engine Optimization) and HTML formatting, you can follow these tips (a rough example of the resulting markup is sketched after the closing paragraph below):

- Use the main keyword (Scaricare AutoCAD Map 3D 2016 Codice Di Attivazione 64 Bits IT) in the title, the introduction, and the conclusion of the article, placing it at or near the beginning of the sentences.
- Use secondary or related keywords (such as AutoCAD Map 3D, geospatial design, Autodesk, Windows 64 bit) in the body of the article, distributing them naturally and consistently with the text.
- Use the appropriate HTML tags to structure the title (h1), the subheadings (h2), the paragraphs (p), ordered (ol) and unordered (ul) lists, and hyperlinks (a).
- Use alt attributes for any images included in the article, describing their content in a way that is relevant to the main keyword.
- Use meta tags to provide additional information about the article, such as its title, description, keywords, author, and language.

By following these tips you can improve the visibility and readability of the title and the article on search engines and mobile devices. To check the effectiveness of the SEO and the HTML formatting, you can use online tools such as Google Search Console, Google PageSpeed Insights, or W
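To make the markup advice above concrete, here is a rough, hypothetical HTML skeleton showing where each of the recommended elements goes. The description, author, link target, and image file name are illustrative placeholders and are not taken from the original article.

```html
<!DOCTYPE html>
<html lang="it">
<head>
  <meta charset="utf-8">
  <!-- Meta tags: title, description, keywords, author, and language -->
  <title>Scaricare AutoCAD Map 3D 2016 Codice Di Attivazione 64 Bits IT</title>
  <meta name="description" content="How to download and activate AutoCAD Map 3D 2016 on Windows 64 bit.">
  <meta name="keywords" content="AutoCAD Map 3D, geospatial design, Autodesk, Windows 64 bit">
  <meta name="author" content="(author name)">
</head>
<body>
  <!-- Main keyword placed at the start of the h1 title -->
  <h1>Scaricare AutoCAD Map 3D 2016 Codice Di Attivazione 64 Bits IT</h1>
  <h2>How to download the software</h2>
  <p>Introductory paragraph with the main keyword near the beginning.</p>
  <!-- Ordered list for the installation steps -->
  <ol>
    <li>Go to the official Autodesk website and log in.</li>
    <li>Download the version for Windows 64 bit.</li>
  </ol>
  <!-- Unordered list with a hyperlink -->
  <ul>
    <li><a href="https://www.autodesk.com">Autodesk website</a></li>
  </ul>
  <!-- Image with an alt attribute relevant to the main keyword -->
  <img src="autocad-map-3d-2016.png" alt="Scaricare AutoCAD Map 3D 2016 Codice Di Attivazione 64 Bits IT">
</body>
</html>
```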

            -

            81aa517590
            -
            -
            \ No newline at end of file diff --git a/spaces/niizam/sovits-models/hubert/hubert_model_onnx.py b/spaces/niizam/sovits-models/hubert/hubert_model_onnx.py deleted file mode 100644 index d18f3c2a0fc29592a573a9780308d38f059640b9..0000000000000000000000000000000000000000 --- a/spaces/niizam/sovits-models/hubert/hubert_model_onnx.py +++ /dev/null @@ -1,217 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - def forward(self, x): - return self.units(x) - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - 
super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. 
- Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/COCO/mask_rcnn_vitdet_b_100ep.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/COCO/mask_rcnn_vitdet_b_100ep.py deleted file mode 100644 index 8fd36e92da0137df8aae5935e71b7af419ac1016..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/COCO/mask_rcnn_vitdet_b_100ep.py +++ /dev/null @@ -1,40 +0,0 @@ -from functools import partial -from fvcore.common.param_scheduler import MultiStepParamScheduler - -from detectron2 import model_zoo -from detectron2.config import LazyCall as L -from detectron2.solver import WarmupParamScheduler -from detectron2.modeling.backbone.vit import get_vit_lr_decay_rate - -from ..common.coco_loader_lsj import dataloader - - -model = model_zoo.get_config("common/models/mask_rcnn_vitdet.py").model - -# Initialization and trainer settings -train = model_zoo.get_config("common/train.py").train -train.amp.enabled = True -train.ddp.fp16_compression = True -train.init_checkpoint = ( - "detectron2://ImageNetPretrained/MAE/mae_pretrain_vit_base.pth?matching_heuristics=True" -) - - -# Schedule -# 100 ep = 184375 iters * 64 images/iter / 118000 images/ep -train.max_iter = 184375 - -lr_multiplier = L(WarmupParamScheduler)( - scheduler=L(MultiStepParamScheduler)( - values=[1.0, 0.1, 0.01], - milestones=[163889, 177546], - num_updates=train.max_iter, - ), - warmup_length=250 / train.max_iter, - warmup_factor=0.001, -) - -# Optimizer -optimizer = model_zoo.get_config("common/optim.py").AdamW -optimizer.params.lr_factor_func = partial(get_vit_lr_decay_rate, num_layers=12, lr_decay_rate=0.7) -optimizer.params.overrides = {"pos_embed": {"weight_decay": 0.0}} diff --git a/spaces/nkatraga/7.22.VideoSummary2/summarize.py b/spaces/nkatraga/7.22.VideoSummary2/summarize.py deleted file mode 100644 index 52e42585f66a92dc2e3a99822c4bb420ecf4bd52..0000000000000000000000000000000000000000 --- a/spaces/nkatraga/7.22.VideoSummary2/summarize.py +++ /dev/null @@ -1,43 +0,0 @@ -import traceback -import sys - -from youtube_transcript_api import YouTubeTranscriptApi -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - -def Summarizer(link, model): - - video_id = link.split("=")[1] - - try: - transcript = YouTubeTranscriptApi.get_transcript(video_id) - FinalTranscript = ' '.join([i['text'] for i in transcript]) - - if model == "Pegasus": - checkpoint = "google/pegasus-large" - elif model == "mT5": - checkpoint = "csebuetnlp/mT5_multilingual_XLSum" - elif model == "BART": - checkpoint = "sshleifer/distilbart-cnn-12-6" - - tokenizer = AutoTokenizer.from_pretrained(checkpoint) - model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) - - - inputs = tokenizer(FinalTranscript, - max_length=1024, - truncation=True, - return_tensors="pt") - - summary_ids = model.generate(inputs["input_ids"]) - summary = tokenizer.batch_decode(summary_ids, - skip_special_tokens=True, - clean_up_tokenization_spaces=False) - - - return summary[0] - - - except Exception: - print(traceback.format_exc()) - # or - print(sys.exc_info()[2]) diff --git a/spaces/oliver2023/mm-react/README.md b/spaces/oliver2023/mm-react/README.md deleted file mode 100644 index 
f070b45f039a1fd6cfa9a04095b7a5cd2befff93..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/mm-react/README.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: mm-react -emoji: 💻 -colorFrom: indigo -colorTo: pink -sdk: docker -pinned: false -license: other -duplicated_from: microsoft-cognitive-service/mm-react ---- - -

            Additional Details

            -

- MM-ReAct Website · MM-ReAct Paper · MM-ReAct Code

            - -* If you modify the code you can build "langchain-0.0.94-py3-none-any.whl" from [this folder](https://github.com/microsoft/MM-REACT/tree/main/langchain) using "poetry build" -* [List of environment Variables](https://github.com/microsoft/MM-REACT#here-are-the-list-of-resources-you-need-to-set-up-in-azure-and-their-environment-variables) you need to set as SECRET in huggingface space. diff --git a/spaces/omb23/pettrainingmodel/app.py b/spaces/omb23/pettrainingmodel/app.py deleted file mode 100644 index aa7618837a58e83a7f8daa37dd1769a69f8853f6..0000000000000000000000000000000000000000 --- a/spaces/omb23/pettrainingmodel/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import skimage - -learn = load_learner('export (1).pkl') - -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Pet Breed Classifier" -description = "A pet breed classifier trained on the Oxford Pets dataset with fastai. Created as a demo for Gradio and HuggingFace Spaces." -article="

            Blog post

            " -examples = ['siamese.jpg'] -interpretation='default' -enable_queue=True - -gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(512, 512)),outputs=gr.outputs.Label(num_top_classes=3),title=title,description=description,article=article,examples=examples,interpretation=interpretation,enable_queue=enable_queue).launch() - diff --git a/spaces/openskyml/pigeon-chat/README.md b/spaces/openskyml/pigeon-chat/README.md deleted file mode 100644 index ad9d868539efbc1168894eb7b7f294f5baed8cd1..0000000000000000000000000000000000000000 --- a/spaces/openskyml/pigeon-chat/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Pigeon-Chat -emoji: 🕊 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: true -suggested_hardware: t4-small ---- - -

            🕊 PigeonChat

            - -🚀 This space runs very fast even on CPU. - -🌍 PigeonChat is available worldwide in over 160 languages. - -🔐 PigeonChat is powered by open source and is completely private. - -🎠 You get totally unique and creative answers. - -👥️️ Developed by [OpenSkyML](https://huggingface.co/openskyml). \ No newline at end of file diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_n\305\221v\303\251r.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_n\305\221v\303\251r.html" deleted file mode 100644 index 2ce9cb5daf0cb1b44bf9d34af0c91b9e3ba2d9d4..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/counterfactual_n\305\221v\303\251r.html" +++ /dev/null @@ -1,23 +0,0 @@ -
0th instance:

Source Saliency Heatmap
x: Generated tokens, y: Attributed tokens

|             | ▁He's → ▁She's | ▁a     | ▁nurse. | </s>   |
|-------------|----------------|--------|---------|--------|
| ▁Ő          | -0.087         | -0.039 | -0.059  | -0.139 |
| ▁nővér.     | 0.055          | 0.033  | 0.011   | -0.009 |
| </s>        | 0.0            | 0.0    | 0.0     | 0.0    |
| probability | 0.56           | 0.005  | -0.0    | -0.001 |
            - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/cm_stochastic_iterative.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/cm_stochastic_iterative.md deleted file mode 100644 index a1d5f64036e6b1320e7d7bf7de8c96877825903b..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/cm_stochastic_iterative.md +++ /dev/null @@ -1,15 +0,0 @@ -# CMStochasticIterativeScheduler - -[Consistency Models](https://huggingface.co/papers/2303.01469) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. - -The abstract from the paper is: - -*Diffusion models have made significant breakthroughs in image, audio, and video generation, but they depend on an iterative generation process that causes slow sampling speed and caps their potential for real-time applications. To overcome this limitation, we propose consistency models, a new family of generative models that achieve high sample quality without adversarial training. They support fast one-step generation by design, while still allowing for few-step sampling to trade compute for sample quality. They also support zero-shot data editing, like image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either as a way to distill pre-trained diffusion models, or as standalone generative models. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step generation. For example, we achieve the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained as standalone generative models, consistency models also outperform single-step, non-adversarial generative models on standard benchmarks like CIFAR-10, ImageNet 64x64 and LSUN 256x256.* - -The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models). - -## CMStochasticIterativeScheduler -[[autodoc]] CMStochasticIterativeScheduler - -## CMStochasticIterativeSchedulerOutput -[[autodoc]] schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/installation.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/installation.md deleted file mode 100644 index 1a0951bf7bbaf942e053cfe7f5ebf851691ae3f6..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/installation.md +++ /dev/null @@ -1,146 +0,0 @@ - - -# Installation - -Install 🤗 Diffusers for whichever deep learning library you're working with. - -🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+ and Flax. Follow the installation instructions below for the deep learning library you are using: - -- [PyTorch](https://pytorch.org/get-started/locally/) installation instructions. -- [Flax](https://flax.readthedocs.io/en/latest/) installation instructions. - -## Install with pip - -You should install 🤗 Diffusers in a [virtual environment](https://docs.python.org/3/library/venv.html). 
-If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). -A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. - -Start by creating a virtual environment in your project directory: - -```bash -python -m venv .env -``` - -Activate the virtual environment: - -```bash -source .env/bin/activate -``` - -🤗 Diffusers also relies on the 🤗 Transformers library, and you can install both with the following command: - - - -```bash -pip install diffusers["torch"] transformers -``` - - -```bash -pip install diffusers["flax"] transformers -``` - - - -## Install from source - -Before installing 🤗 Diffusers from source, make sure you have `torch` and 🤗 Accelerate installed. - -For `torch` installation, refer to the `torch` [installation](https://pytorch.org/get-started/locally/#start-locally) guide. - -To install 🤗 Accelerate: - -```bash -pip install accelerate -``` - -Install 🤗 Diffusers from source with the following command: - -```bash -pip install git+https://github.com/huggingface/diffusers -``` - -This command installs the bleeding edge `main` version rather than the latest `stable` version. -The `main` version is useful for staying up-to-date with the latest developments. -For instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. -However, this means the `main` version may not always be stable. -We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. -If you run into a problem, please open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose), so we can fix it even sooner! - -## Editable install - -You will need an editable install if you'd like to: - -* Use the `main` version of the source code. -* Contribute to 🤗 Diffusers and need to test changes in the code. - -Clone the repository and install 🤗 Diffusers with the following commands: - -```bash -git clone https://github.com/huggingface/diffusers.git -cd diffusers -``` - - - -```bash -pip install -e ".[torch]" -``` - - -```bash -pip install -e ".[flax]" -``` - - - -These commands will link the folder you cloned the repository to and your Python library paths. -Python will now look inside the folder you cloned to in addition to the normal library paths. -For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.8/site-packages/`, Python will also search the `~/diffusers/` folder you cloned to. - - - -You must keep the `diffusers` folder if you want to keep using the library. - - - -Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: - -```bash -cd ~/diffusers/ -git pull -``` - -Your Python environment will find the `main` version of 🤗 Diffusers on the next run. - -## Notice on telemetry logging - -Our library gathers telemetry information during `from_pretrained()` requests. -This data includes the version of Diffusers and PyTorch/Flax, the requested model or pipeline class, -and the path to a pre-trained checkpoint if it is hosted on the Hub. -This usage data helps us debug issues and prioritize new features. -Telemetry is only sent when loading models and pipelines from the HuggingFace Hub, -and is not collected during local usage. 
- -We understand that not everyone wants to share additional information, and we respect your privacy, -so you can disable telemetry collection by setting the `DISABLE_TELEMETRY` environment variable from your terminal: - -On Linux/MacOS: -```bash -export DISABLE_TELEMETRY=YES -``` - -On Windows: -```bash -set DISABLE_TELEMETRY=YES -``` diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_2d_condition.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_2d_condition.py deleted file mode 100644 index 385f0a42c5986b59d5a6510c977a1a4790cc0249..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_2d_condition.py +++ /dev/null @@ -1,1045 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Any, Dict, List, Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.utils.checkpoint - -from ..configuration_utils import ConfigMixin, register_to_config -from ..loaders import UNet2DConditionLoadersMixin -from ..utils import BaseOutput, logging -from .activations import get_activation -from .attention_processor import ( - ADDED_KV_ATTENTION_PROCESSORS, - CROSS_ATTENTION_PROCESSORS, - AttentionProcessor, - AttnAddedKVProcessor, - AttnProcessor, -) -from .embeddings import ( - GaussianFourierProjection, - ImageHintTimeEmbedding, - ImageProjection, - ImageTimeEmbedding, - PositionNet, - TextImageProjection, - TextImageTimeEmbedding, - TextTimeEmbedding, - TimestepEmbedding, - Timesteps, -) -from .modeling_utils import ModelMixin -from .unet_2d_blocks import ( - UNetMidBlock2DCrossAttn, - UNetMidBlock2DSimpleCrossAttn, - get_down_block, - get_up_block, -) - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -class UNet2DConditionOutput(BaseOutput): - """ - The output of [`UNet2DConditionModel`]. - - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model. - """ - - sample: torch.FloatTensor = None - - -class UNet2DConditionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin): - r""" - A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample - shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for it's generic methods implemented - for all models (such as downloading or saving). - - Parameters: - sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): - Height and width of input/output sample. - in_channels (`int`, *optional*, defaults to 4): Number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 4): Number of channels in the output. 
- center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. - flip_sin_to_cos (`bool`, *optional*, defaults to `False`): - Whether to flip the sin to cos in the time embedding. - freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`): - The tuple of downsample blocks to use. - mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`): - Block type for middle of UNet, it can be either `UNetMidBlock2DCrossAttn` or - `UNetMidBlock2DSimpleCrossAttn`. If `None`, the mid block layer is skipped. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")`): - The tuple of upsample blocks to use. - only_cross_attention(`bool` or `Tuple[bool]`, *optional*, default to `False`): - Whether to include self-attention in the basic transformer blocks, see - [`~models.attention.BasicTransformerBlock`]. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): - The tuple of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. - downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. - mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. - If `None`, normalization and activation layers is skipped in post-processing. - norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. - cross_attention_dim (`int` or `Tuple[int]`, *optional*, defaults to 1280): - The dimension of the cross attention features. - transformer_layers_per_block (`int` or `Tuple[int]`, *optional*, defaults to 1): - The number of transformer blocks of type [`~models.attention.BasicTransformerBlock`]. Only relevant for - [`~models.unet_2d_blocks.CrossAttnDownBlock2D`], [`~models.unet_2d_blocks.CrossAttnUpBlock2D`], - [`~models.unet_2d_blocks.UNetMidBlock2DCrossAttn`]. - encoder_hid_dim (`int`, *optional*, defaults to None): - If `encoder_hid_dim_type` is defined, `encoder_hidden_states` will be projected from `encoder_hid_dim` - dimension to `cross_attention_dim`. - encoder_hid_dim_type (`str`, *optional*, defaults to `None`): - If given, the `encoder_hidden_states` and potentially other embeddings are down-projected to text - embeddings of dimension `cross_attention` according to `encoder_hid_dim_type`. - attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. - num_attention_heads (`int`, *optional*): - The number of attention heads. If not defined, defaults to `attention_head_dim` - resnet_time_scale_shift (`str`, *optional*, defaults to `"default"`): Time scale shift config - for ResNet blocks (see [`~models.resnet.ResnetBlock2D`]). Choose from `default` or `scale_shift`. - class_embed_type (`str`, *optional*, defaults to `None`): - The type of class embedding to use which is ultimately summed with the time embeddings. 
Choose from `None`, - `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`. - addition_embed_type (`str`, *optional*, defaults to `None`): - Configures an optional embedding which will be summed with the time embeddings. Choose from `None` or - "text". "text" will use the `TextTimeEmbedding` layer. - addition_time_embed_dim: (`int`, *optional*, defaults to `None`): - Dimension for the timestep embeddings. - num_class_embeds (`int`, *optional*, defaults to `None`): - Input dimension of the learnable embedding matrix to be projected to `time_embed_dim`, when performing - class conditioning with `class_embed_type` equal to `None`. - time_embedding_type (`str`, *optional*, defaults to `positional`): - The type of position embedding to use for timesteps. Choose from `positional` or `fourier`. - time_embedding_dim (`int`, *optional*, defaults to `None`): - An optional override for the dimension of the projected time embedding. - time_embedding_act_fn (`str`, *optional*, defaults to `None`): - Optional activation function to use only once on the time embeddings before they are passed to the rest of - the UNet. Choose from `silu`, `mish`, `gelu`, and `swish`. - timestep_post_act (`str`, *optional*, defaults to `None`): - The second activation function to use in timestep embedding. Choose from `silu`, `mish` and `gelu`. - time_cond_proj_dim (`int`, *optional*, defaults to `None`): - The dimension of `cond_proj` layer in the timestep embedding. - conv_in_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_in` layer. - conv_out_kernel (`int`, *optional*, default to `3`): The kernel size of `conv_out` layer. - projection_class_embeddings_input_dim (`int`, *optional*): The dimension of the `class_labels` input when - `class_embed_type="projection"`. Required when `class_embed_type="projection"`. - class_embeddings_concat (`bool`, *optional*, defaults to `False`): Whether to concatenate the time - embeddings with the class embeddings. - mid_block_only_cross_attention (`bool`, *optional*, defaults to `None`): - Whether to use cross attention with the mid block when using the `UNetMidBlock2DSimpleCrossAttn`. If - `only_cross_attention` is given as a single boolean and `mid_block_only_cross_attention` is `None`, the - `only_cross_attention` value is used as the value for `mid_block_only_cross_attention`. Default to `False` - otherwise. 
- """ - - _supports_gradient_checkpointing = True - - @register_to_config - def __init__( - self, - sample_size: Optional[int] = None, - in_channels: int = 4, - out_channels: int = 4, - center_input_sample: bool = False, - flip_sin_to_cos: bool = True, - freq_shift: int = 0, - down_block_types: Tuple[str] = ( - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "DownBlock2D", - ), - mid_block_type: Optional[str] = "UNetMidBlock2DCrossAttn", - up_block_types: Tuple[str] = ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"), - only_cross_attention: Union[bool, Tuple[bool]] = False, - block_out_channels: Tuple[int] = (320, 640, 1280, 1280), - layers_per_block: Union[int, Tuple[int]] = 2, - downsample_padding: int = 1, - mid_block_scale_factor: float = 1, - dropout: float = 0.0, - act_fn: str = "silu", - norm_num_groups: Optional[int] = 32, - norm_eps: float = 1e-5, - cross_attention_dim: Union[int, Tuple[int]] = 1280, - transformer_layers_per_block: Union[int, Tuple[int]] = 1, - encoder_hid_dim: Optional[int] = None, - encoder_hid_dim_type: Optional[str] = None, - attention_head_dim: Union[int, Tuple[int]] = 8, - num_attention_heads: Optional[Union[int, Tuple[int]]] = None, - dual_cross_attention: bool = False, - use_linear_projection: bool = False, - class_embed_type: Optional[str] = None, - addition_embed_type: Optional[str] = None, - addition_time_embed_dim: Optional[int] = None, - num_class_embeds: Optional[int] = None, - upcast_attention: bool = False, - resnet_time_scale_shift: str = "default", - resnet_skip_time_act: bool = False, - resnet_out_scale_factor: int = 1.0, - time_embedding_type: str = "positional", - time_embedding_dim: Optional[int] = None, - time_embedding_act_fn: Optional[str] = None, - timestep_post_act: Optional[str] = None, - time_cond_proj_dim: Optional[int] = None, - conv_in_kernel: int = 3, - conv_out_kernel: int = 3, - projection_class_embeddings_input_dim: Optional[int] = None, - attention_type: str = "default", - class_embeddings_concat: bool = False, - mid_block_only_cross_attention: Optional[bool] = None, - cross_attention_norm: Optional[str] = None, - addition_embed_type_num_heads=64, - ): - super().__init__() - - self.sample_size = sample_size - - if num_attention_heads is not None: - raise ValueError( - "At the moment it is not possible to define the number of attention heads via `num_attention_heads` because of a naming issue as described in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131. Passing `num_attention_heads` will only be supported in diffusers v0.19." - ) - - # If `num_attention_heads` is not defined (which is the case for most models) - # it will default to `attention_head_dim`. This looks weird upon first reading it and it is. - # The reason for this behavior is to correct for incorrectly named variables that were introduced - # when this library was created. The incorrect naming was only discovered much later in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131 - # Changing `attention_head_dim` to `num_attention_heads` for 40,000+ configurations is too backwards breaking - # which is why we correct for the naming here. - num_attention_heads = num_attention_heads or attention_head_dim - - # Check inputs - if len(down_block_types) != len(up_block_types): - raise ValueError( - f"Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`: {down_block_types}. `up_block_types`: {up_block_types}." 
- ) - - if len(block_out_channels) != len(down_block_types): - raise ValueError( - f"Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`: {block_out_channels}. `down_block_types`: {down_block_types}." - ) - - if not isinstance(only_cross_attention, bool) and len(only_cross_attention) != len(down_block_types): - raise ValueError( - f"Must provide the same number of `only_cross_attention` as `down_block_types`. `only_cross_attention`: {only_cross_attention}. `down_block_types`: {down_block_types}." - ) - - if not isinstance(num_attention_heads, int) and len(num_attention_heads) != len(down_block_types): - raise ValueError( - f"Must provide the same number of `num_attention_heads` as `down_block_types`. `num_attention_heads`: {num_attention_heads}. `down_block_types`: {down_block_types}." - ) - - if not isinstance(attention_head_dim, int) and len(attention_head_dim) != len(down_block_types): - raise ValueError( - f"Must provide the same number of `attention_head_dim` as `down_block_types`. `attention_head_dim`: {attention_head_dim}. `down_block_types`: {down_block_types}." - ) - - if isinstance(cross_attention_dim, list) and len(cross_attention_dim) != len(down_block_types): - raise ValueError( - f"Must provide the same number of `cross_attention_dim` as `down_block_types`. `cross_attention_dim`: {cross_attention_dim}. `down_block_types`: {down_block_types}." - ) - - if not isinstance(layers_per_block, int) and len(layers_per_block) != len(down_block_types): - raise ValueError( - f"Must provide the same number of `layers_per_block` as `down_block_types`. `layers_per_block`: {layers_per_block}. `down_block_types`: {down_block_types}." - ) - - # input - conv_in_padding = (conv_in_kernel - 1) // 2 - self.conv_in = nn.Conv2d( - in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding - ) - - # time - if time_embedding_type == "fourier": - time_embed_dim = time_embedding_dim or block_out_channels[0] * 2 - if time_embed_dim % 2 != 0: - raise ValueError(f"`time_embed_dim` should be divisible by 2, but is {time_embed_dim}.") - self.time_proj = GaussianFourierProjection( - time_embed_dim // 2, set_W_to_weight=False, log=False, flip_sin_to_cos=flip_sin_to_cos - ) - timestep_input_dim = time_embed_dim - elif time_embedding_type == "positional": - time_embed_dim = time_embedding_dim or block_out_channels[0] * 4 - - self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift) - timestep_input_dim = block_out_channels[0] - else: - raise ValueError( - f"{time_embedding_type} does not exist. Please make sure to use one of `fourier` or `positional`." - ) - - self.time_embedding = TimestepEmbedding( - timestep_input_dim, - time_embed_dim, - act_fn=act_fn, - post_act_fn=timestep_post_act, - cond_proj_dim=time_cond_proj_dim, - ) - - if encoder_hid_dim_type is None and encoder_hid_dim is not None: - encoder_hid_dim_type = "text_proj" - self.register_to_config(encoder_hid_dim_type=encoder_hid_dim_type) - logger.info("encoder_hid_dim_type defaults to 'text_proj' as `encoder_hid_dim` is defined.") - - if encoder_hid_dim is None and encoder_hid_dim_type is not None: - raise ValueError( - f"`encoder_hid_dim` has to be defined when `encoder_hid_dim_type` is set to {encoder_hid_dim_type}." - ) - - if encoder_hid_dim_type == "text_proj": - self.encoder_hid_proj = nn.Linear(encoder_hid_dim, cross_attention_dim) - elif encoder_hid_dim_type == "text_image_proj": - # image_embed_dim DOESN'T have to be `cross_attention_dim`. 
To not clutter the __init__ too much - # they are set to `cross_attention_dim` here as this is exactly the required dimension for the currently only use - # case when `addition_embed_type == "text_image_proj"` (Kadinsky 2.1)` - self.encoder_hid_proj = TextImageProjection( - text_embed_dim=encoder_hid_dim, - image_embed_dim=cross_attention_dim, - cross_attention_dim=cross_attention_dim, - ) - elif encoder_hid_dim_type == "image_proj": - # Kandinsky 2.2 - self.encoder_hid_proj = ImageProjection( - image_embed_dim=encoder_hid_dim, - cross_attention_dim=cross_attention_dim, - ) - elif encoder_hid_dim_type is not None: - raise ValueError( - f"encoder_hid_dim_type: {encoder_hid_dim_type} must be None, 'text_proj' or 'text_image_proj'." - ) - else: - self.encoder_hid_proj = None - - # class embedding - if class_embed_type is None and num_class_embeds is not None: - self.class_embedding = nn.Embedding(num_class_embeds, time_embed_dim) - elif class_embed_type == "timestep": - self.class_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim, act_fn=act_fn) - elif class_embed_type == "identity": - self.class_embedding = nn.Identity(time_embed_dim, time_embed_dim) - elif class_embed_type == "projection": - if projection_class_embeddings_input_dim is None: - raise ValueError( - "`class_embed_type`: 'projection' requires `projection_class_embeddings_input_dim` be set" - ) - # The projection `class_embed_type` is the same as the timestep `class_embed_type` except - # 1. the `class_labels` inputs are not first converted to sinusoidal embeddings - # 2. it projects from an arbitrary input dimension. - # - # Note that `TimestepEmbedding` is quite general, being mainly linear layers and activations. - # When used for embedding actual timesteps, the timesteps are first converted to sinusoidal embeddings. - # As a result, `TimestepEmbedding` can be passed arbitrary vectors. - self.class_embedding = TimestepEmbedding(projection_class_embeddings_input_dim, time_embed_dim) - elif class_embed_type == "simple_projection": - if projection_class_embeddings_input_dim is None: - raise ValueError( - "`class_embed_type`: 'simple_projection' requires `projection_class_embeddings_input_dim` be set" - ) - self.class_embedding = nn.Linear(projection_class_embeddings_input_dim, time_embed_dim) - else: - self.class_embedding = None - - if addition_embed_type == "text": - if encoder_hid_dim is not None: - text_time_embedding_from_dim = encoder_hid_dim - else: - text_time_embedding_from_dim = cross_attention_dim - - self.add_embedding = TextTimeEmbedding( - text_time_embedding_from_dim, time_embed_dim, num_heads=addition_embed_type_num_heads - ) - elif addition_embed_type == "text_image": - # text_embed_dim and image_embed_dim DON'T have to be `cross_attention_dim`. 
To not clutter the __init__ too much - # they are set to `cross_attention_dim` here as this is exactly the required dimension for the currently only use - # case when `addition_embed_type == "text_image"` (Kadinsky 2.1)` - self.add_embedding = TextImageTimeEmbedding( - text_embed_dim=cross_attention_dim, image_embed_dim=cross_attention_dim, time_embed_dim=time_embed_dim - ) - elif addition_embed_type == "text_time": - self.add_time_proj = Timesteps(addition_time_embed_dim, flip_sin_to_cos, freq_shift) - self.add_embedding = TimestepEmbedding(projection_class_embeddings_input_dim, time_embed_dim) - elif addition_embed_type == "image": - # Kandinsky 2.2 - self.add_embedding = ImageTimeEmbedding(image_embed_dim=encoder_hid_dim, time_embed_dim=time_embed_dim) - elif addition_embed_type == "image_hint": - # Kandinsky 2.2 ControlNet - self.add_embedding = ImageHintTimeEmbedding(image_embed_dim=encoder_hid_dim, time_embed_dim=time_embed_dim) - elif addition_embed_type is not None: - raise ValueError(f"addition_embed_type: {addition_embed_type} must be None, 'text' or 'text_image'.") - - if time_embedding_act_fn is None: - self.time_embed_act = None - else: - self.time_embed_act = get_activation(time_embedding_act_fn) - - self.down_blocks = nn.ModuleList([]) - self.up_blocks = nn.ModuleList([]) - - if isinstance(only_cross_attention, bool): - if mid_block_only_cross_attention is None: - mid_block_only_cross_attention = only_cross_attention - - only_cross_attention = [only_cross_attention] * len(down_block_types) - - if mid_block_only_cross_attention is None: - mid_block_only_cross_attention = False - - if isinstance(num_attention_heads, int): - num_attention_heads = (num_attention_heads,) * len(down_block_types) - - if isinstance(attention_head_dim, int): - attention_head_dim = (attention_head_dim,) * len(down_block_types) - - if isinstance(cross_attention_dim, int): - cross_attention_dim = (cross_attention_dim,) * len(down_block_types) - - if isinstance(layers_per_block, int): - layers_per_block = [layers_per_block] * len(down_block_types) - - if isinstance(transformer_layers_per_block, int): - transformer_layers_per_block = [transformer_layers_per_block] * len(down_block_types) - - if class_embeddings_concat: - # The time embeddings are concatenated with the class embeddings. 
The dimension of the - # time embeddings passed to the down, middle, and up blocks is twice the dimension of the - # regular time embeddings - blocks_time_embed_dim = time_embed_dim * 2 - else: - blocks_time_embed_dim = time_embed_dim - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block[i], - transformer_layers_per_block=transformer_layers_per_block[i], - in_channels=input_channel, - out_channels=output_channel, - temb_channels=blocks_time_embed_dim, - add_downsample=not is_final_block, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=cross_attention_dim[i], - num_attention_heads=num_attention_heads[i], - downsample_padding=downsample_padding, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - attention_type=attention_type, - resnet_skip_time_act=resnet_skip_time_act, - resnet_out_scale_factor=resnet_out_scale_factor, - cross_attention_norm=cross_attention_norm, - attention_head_dim=attention_head_dim[i] if attention_head_dim[i] is not None else output_channel, - dropout=dropout, - ) - self.down_blocks.append(down_block) - - # mid - if mid_block_type == "UNetMidBlock2DCrossAttn": - self.mid_block = UNetMidBlock2DCrossAttn( - transformer_layers_per_block=transformer_layers_per_block[-1], - in_channels=block_out_channels[-1], - temb_channels=blocks_time_embed_dim, - dropout=dropout, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - resnet_time_scale_shift=resnet_time_scale_shift, - cross_attention_dim=cross_attention_dim[-1], - num_attention_heads=num_attention_heads[-1], - resnet_groups=norm_num_groups, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - attention_type=attention_type, - ) - elif mid_block_type == "UNetMidBlock2DSimpleCrossAttn": - self.mid_block = UNetMidBlock2DSimpleCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=blocks_time_embed_dim, - dropout=dropout, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - cross_attention_dim=cross_attention_dim[-1], - attention_head_dim=attention_head_dim[-1], - resnet_groups=norm_num_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - skip_time_act=resnet_skip_time_act, - only_cross_attention=mid_block_only_cross_attention, - cross_attention_norm=cross_attention_norm, - ) - elif mid_block_type is None: - self.mid_block = None - else: - raise ValueError(f"unknown mid_block_type : {mid_block_type}") - - # count how many layers upsample the images - self.num_upsamplers = 0 - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - reversed_num_attention_heads = list(reversed(num_attention_heads)) - reversed_layers_per_block = list(reversed(layers_per_block)) - reversed_cross_attention_dim = list(reversed(cross_attention_dim)) - reversed_transformer_layers_per_block = list(reversed(transformer_layers_per_block)) - only_cross_attention = list(reversed(only_cross_attention)) - - output_channel = reversed_block_out_channels[0] - for i, up_block_type 
in enumerate(up_block_types): - is_final_block = i == len(block_out_channels) - 1 - - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] - - # add upsample block for all BUT final layer - if not is_final_block: - add_upsample = True - self.num_upsamplers += 1 - else: - add_upsample = False - - up_block = get_up_block( - up_block_type, - num_layers=reversed_layers_per_block[i] + 1, - transformer_layers_per_block=reversed_transformer_layers_per_block[i], - in_channels=input_channel, - out_channels=output_channel, - prev_output_channel=prev_output_channel, - temb_channels=blocks_time_embed_dim, - add_upsample=add_upsample, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - cross_attention_dim=reversed_cross_attention_dim[i], - num_attention_heads=reversed_num_attention_heads[i], - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention[i], - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - attention_type=attention_type, - resnet_skip_time_act=resnet_skip_time_act, - resnet_out_scale_factor=resnet_out_scale_factor, - cross_attention_norm=cross_attention_norm, - attention_head_dim=attention_head_dim[i] if attention_head_dim[i] is not None else output_channel, - dropout=dropout, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - if norm_num_groups is not None: - self.conv_norm_out = nn.GroupNorm( - num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps - ) - - self.conv_act = get_activation(act_fn) - - else: - self.conv_norm_out = None - self.conv_act = None - - conv_out_padding = (conv_out_kernel - 1) // 2 - self.conv_out = nn.Conv2d( - block_out_channels[0], out_channels, kernel_size=conv_out_kernel, padding=conv_out_padding - ) - - if attention_type in ["gated", "gated-text-image"]: - positive_len = 768 - if isinstance(cross_attention_dim, int): - positive_len = cross_attention_dim - elif isinstance(cross_attention_dim, tuple) or isinstance(cross_attention_dim, list): - positive_len = cross_attention_dim[0] - - feature_type = "text-only" if attention_type == "gated" else "text-image" - self.position_net = PositionNet( - positive_len=positive_len, out_dim=cross_attention_dim, feature_type=feature_type - ) - - @property - def attn_processors(self) -> Dict[str, AttentionProcessor]: - r""" - Returns: - `dict` of attention processors: A dictionary containing all attention processors used in the model with - indexed by its weight name. - """ - # set recursively - processors = {} - - def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): - if hasattr(module, "get_processor"): - processors[f"{name}.processor"] = module.get_processor(return_deprecated_lora=True) - - for sub_name, child in module.named_children(): - fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) - - return processors - - for name, module in self.named_children(): - fn_recursive_add_processors(name, module, processors) - - return processors - - def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): - r""" - Sets the attention processor to use to compute attention. 
- - Parameters: - processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): - The instantiated processor class or a dictionary of processor classes that will be set as the processor - for **all** `Attention` layers. - - If `processor` is a dict, the key needs to define the path to the corresponding cross attention - processor. This is strongly recommended when setting trainable attention processors. - - """ - count = len(self.attn_processors.keys()) - - if isinstance(processor, dict) and len(processor) != count: - raise ValueError( - f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" - f" number of attention layers: {count}. Please make sure to pass {count} processor classes." - ) - - def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): - if hasattr(module, "set_processor"): - if not isinstance(processor, dict): - module.set_processor(processor) - else: - module.set_processor(processor.pop(f"{name}.processor")) - - for sub_name, child in module.named_children(): - fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) - - for name, module in self.named_children(): - fn_recursive_attn_processor(name, module, processor) - - def set_default_attn_processor(self): - """ - Disables custom attention processors and sets the default attention implementation. - """ - if all(proc.__class__ in ADDED_KV_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): - processor = AttnAddedKVProcessor() - elif all(proc.__class__ in CROSS_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): - processor = AttnProcessor() - else: - raise ValueError( - f"Cannot call `set_default_attn_processor` when attention processors are of type {next(iter(self.attn_processors.values()))}" - ) - - self.set_attn_processor(processor) - - def set_attention_slice(self, slice_size): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module splits the input tensor in slices to compute attention in - several steps. This is useful for saving some memory in exchange for a small decrease in speed. - - Args: - slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`): - When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If - `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is - provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim` - must be a multiple of `slice_size`. 
- """ - sliceable_head_dims = [] - - def fn_recursive_retrieve_sliceable_dims(module: torch.nn.Module): - if hasattr(module, "set_attention_slice"): - sliceable_head_dims.append(module.sliceable_head_dim) - - for child in module.children(): - fn_recursive_retrieve_sliceable_dims(child) - - # retrieve number of attention layers - for module in self.children(): - fn_recursive_retrieve_sliceable_dims(module) - - num_sliceable_layers = len(sliceable_head_dims) - - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = [dim // 2 for dim in sliceable_head_dims] - elif slice_size == "max": - # make smallest slice possible - slice_size = num_sliceable_layers * [1] - - slice_size = num_sliceable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size - - if len(slice_size) != len(sliceable_head_dims): - raise ValueError( - f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different" - f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}." - ) - - for i in range(len(slice_size)): - size = slice_size[i] - dim = sliceable_head_dims[i] - if size is not None and size > dim: - raise ValueError(f"size {size} has to be smaller or equal to {dim}.") - - # Recursively walk through all the children. - # Any children which exposes the set_attention_slice method - # gets the message - def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]): - if hasattr(module, "set_attention_slice"): - module.set_attention_slice(slice_size.pop()) - - for child in module.children(): - fn_recursive_set_attention_slice(child, slice_size) - - reversed_slice_size = list(reversed(slice_size)) - for module in self.children(): - fn_recursive_set_attention_slice(module, reversed_slice_size) - - def _set_gradient_checkpointing(self, module, value=False): - if hasattr(module, "gradient_checkpointing"): - module.gradient_checkpointing = value - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - encoder_hidden_states: torch.Tensor, - class_labels: Optional[torch.Tensor] = None, - timestep_cond: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None, - down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None, - mid_block_additional_residual: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - return_dict: bool = True, - ) -> Union[UNet2DConditionOutput, Tuple]: - r""" - The [`UNet2DConditionModel`] forward method. - - Args: - sample (`torch.FloatTensor`): - The noisy input tensor with the following shape `(batch, channel, height, width)`. - timestep (`torch.FloatTensor` or `float` or `int`): The number of timesteps to denoise an input. - encoder_hidden_states (`torch.FloatTensor`): - The encoder hidden states with shape `(batch, sequence_length, feature_dim)`. - encoder_attention_mask (`torch.Tensor`): - A cross-attention mask of shape `(batch, sequence_length)` is applied to `encoder_hidden_states`. If - `True` the mask is kept, otherwise if `False` it is discarded. Mask will be converted into a bias, - which adds large negative values to the attention scores corresponding to "discard" tokens. 
- return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain - tuple. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the [`AttnProcessor`]. - added_cond_kwargs (`dict`, *optional*): - A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that - are passed along to the UNet blocks. - - Returns: - [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`: - If `return_dict` is True, an [`~models.unet_2d_condition.UNet2DConditionOutput`] is returned, otherwise - a `tuple` is returned where the first element is the sample tensor. - """ - # By default samples have to be at least a multiple of the overall upsampling factor. - # The overall upsampling factor is equal to 2 ** (# num of upsampling layers). - # However, the upsampling interpolation output size can be forced to fit any upsampling size - # on the fly if necessary. - default_overall_up_factor = 2**self.num_upsamplers - - # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor` - forward_upsample_size = False - upsample_size = None - - if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]): - # Forward upsample size to force interpolation output size. - forward_upsample_size = True - - # ensure attention_mask is a bias, and give it a singleton query_tokens dimension - # expects mask of shape: - # [batch, key_tokens] - # adds singleton query_tokens dimension: - # [batch, 1, key_tokens] - # this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes: - # [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn) - # [batch * heads, query_tokens, key_tokens] (e.g. xformers or classic attn) - if attention_mask is not None: - # assume that mask is expressed as: - # (1 = keep, 0 = discard) - # convert mask into a bias that can be added to attention scores: - # (keep = +0, discard = -10000.0) - attention_mask = (1 - attention_mask.to(sample.dtype)) * -10000.0 - attention_mask = attention_mask.unsqueeze(1) - - # convert encoder_attention_mask to a bias the same way we do for attention_mask - if encoder_attention_mask is not None: - encoder_attention_mask = (1 - encoder_attention_mask.to(sample.dtype)) * -10000.0 - encoder_attention_mask = encoder_attention_mask.unsqueeze(1) - - # 0. center input if necessary - if self.config.center_input_sample: - sample = 2 * sample - 1.0 - - # 1. time - timesteps = timestep - if not torch.is_tensor(timesteps): - # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can - # This would be a good case for the `match` statement (Python 3.10+) - is_mps = sample.device.type == "mps" - if isinstance(timestep, float): - dtype = torch.float32 if is_mps else torch.float64 - else: - dtype = torch.int32 if is_mps else torch.int64 - timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device) - elif len(timesteps.shape) == 0: - timesteps = timesteps[None].to(sample.device) - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps.expand(sample.shape[0]) - - t_emb = self.time_proj(timesteps) - - # `Timesteps` does not contain any weights and will always return f32 tensors - # but time_embedding might actually be running in fp16. so we need to cast here. - # there might be better ways to encapsulate this.
- t_emb = t_emb.to(dtype=sample.dtype) - - emb = self.time_embedding(t_emb, timestep_cond) - aug_emb = None - - if self.class_embedding is not None: - if class_labels is None: - raise ValueError("class_labels should be provided when num_class_embeds > 0") - - if self.config.class_embed_type == "timestep": - class_labels = self.time_proj(class_labels) - - # `Timesteps` does not contain any weights and will always return f32 tensors - # there might be better ways to encapsulate this. - class_labels = class_labels.to(dtype=sample.dtype) - - class_emb = self.class_embedding(class_labels).to(dtype=sample.dtype) - - if self.config.class_embeddings_concat: - emb = torch.cat([emb, class_emb], dim=-1) - else: - emb = emb + class_emb - - if self.config.addition_embed_type == "text": - aug_emb = self.add_embedding(encoder_hidden_states) - elif self.config.addition_embed_type == "text_image": - # Kandinsky 2.1 - style - if "image_embeds" not in added_cond_kwargs: - raise ValueError( - f"{self.__class__} has the config param `addition_embed_type` set to 'text_image' which requires the keyword argument `image_embeds` to be passed in `added_cond_kwargs`" - ) - - image_embs = added_cond_kwargs.get("image_embeds") - text_embs = added_cond_kwargs.get("text_embeds", encoder_hidden_states) - aug_emb = self.add_embedding(text_embs, image_embs) - elif self.config.addition_embed_type == "text_time": - # SDXL - style - if "text_embeds" not in added_cond_kwargs: - raise ValueError( - f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `text_embeds` to be passed in `added_cond_kwargs`" - ) - text_embeds = added_cond_kwargs.get("text_embeds") - if "time_ids" not in added_cond_kwargs: - raise ValueError( - f"{self.__class__} has the config param `addition_embed_type` set to 'text_time' which requires the keyword argument `time_ids` to be passed in `added_cond_kwargs`" - ) - time_ids = added_cond_kwargs.get("time_ids") - time_embeds = self.add_time_proj(time_ids.flatten()) - time_embeds = time_embeds.reshape((text_embeds.shape[0], -1)) - add_embeds = torch.concat([text_embeds, time_embeds], dim=-1) - add_embeds = add_embeds.to(emb.dtype) - aug_emb = self.add_embedding(add_embeds) - elif self.config.addition_embed_type == "image": - # Kandinsky 2.2 - style - if "image_embeds" not in added_cond_kwargs: - raise ValueError( - f"{self.__class__} has the config param `addition_embed_type` set to 'image' which requires the keyword argument `image_embeds` to be passed in `added_cond_kwargs`" - ) - image_embs = added_cond_kwargs.get("image_embeds") - aug_emb = self.add_embedding(image_embs) - elif self.config.addition_embed_type == "image_hint": - # Kandinsky 2.2 - style - if "image_embeds" not in added_cond_kwargs or "hint" not in added_cond_kwargs: - raise ValueError( - f"{self.__class__} has the config param `addition_embed_type` set to 'image_hint' which requires the keyword arguments `image_embeds` and `hint` to be passed in `added_cond_kwargs`" - ) - image_embs = added_cond_kwargs.get("image_embeds") - hint = added_cond_kwargs.get("hint") - aug_emb, hint = self.add_embedding(image_embs, hint) - sample = torch.cat([sample, hint], dim=1) - - emb = emb + aug_emb if aug_emb is not None else emb - - if self.time_embed_act is not None: - emb = self.time_embed_act(emb) - - if self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "text_proj": - encoder_hidden_states = self.encoder_hid_proj(encoder_hidden_states) - elif self.encoder_hid_proj is 
not None and self.config.encoder_hid_dim_type == "text_image_proj": - # Kadinsky 2.1 - style - if "image_embeds" not in added_cond_kwargs: - raise ValueError( - f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'text_image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`" - ) - - image_embeds = added_cond_kwargs.get("image_embeds") - encoder_hidden_states = self.encoder_hid_proj(encoder_hidden_states, image_embeds) - elif self.encoder_hid_proj is not None and self.config.encoder_hid_dim_type == "image_proj": - # Kandinsky 2.2 - style - if "image_embeds" not in added_cond_kwargs: - raise ValueError( - f"{self.__class__} has the config param `encoder_hid_dim_type` set to 'image_proj' which requires the keyword argument `image_embeds` to be passed in `added_conditions`" - ) - image_embeds = added_cond_kwargs.get("image_embeds") - encoder_hidden_states = self.encoder_hid_proj(image_embeds) - # 2. pre-process - sample = self.conv_in(sample) - - # 2.5 GLIGEN position net - if cross_attention_kwargs is not None and cross_attention_kwargs.get("gligen", None) is not None: - cross_attention_kwargs = cross_attention_kwargs.copy() - gligen_args = cross_attention_kwargs.pop("gligen") - cross_attention_kwargs["gligen"] = {"objs": self.position_net(**gligen_args)} - - # 3. down - lora_scale = cross_attention_kwargs.get("scale", 1.0) if cross_attention_kwargs is not None else 1.0 - - is_controlnet = mid_block_additional_residual is not None and down_block_additional_residuals is not None - is_adapter = mid_block_additional_residual is None and down_block_additional_residuals is not None - - down_block_res_samples = (sample,) - for downsample_block in self.down_blocks: - if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention: - # For t2i-adapter CrossAttnDownBlock2D - additional_residuals = {} - if is_adapter and len(down_block_additional_residuals) > 0: - additional_residuals["additional_residuals"] = down_block_additional_residuals.pop(0) - - sample, res_samples = downsample_block( - hidden_states=sample, - temb=emb, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - cross_attention_kwargs=cross_attention_kwargs, - encoder_attention_mask=encoder_attention_mask, - **additional_residuals, - ) - else: - sample, res_samples = downsample_block(hidden_states=sample, temb=emb, scale=lora_scale) - - if is_adapter and len(down_block_additional_residuals) > 0: - sample += down_block_additional_residuals.pop(0) - - down_block_res_samples += res_samples - - if is_controlnet: - new_down_block_res_samples = () - - for down_block_res_sample, down_block_additional_residual in zip( - down_block_res_samples, down_block_additional_residuals - ): - down_block_res_sample = down_block_res_sample + down_block_additional_residual - new_down_block_res_samples = new_down_block_res_samples + (down_block_res_sample,) - - down_block_res_samples = new_down_block_res_samples - - # 4. 
mid - if self.mid_block is not None: - sample = self.mid_block( - sample, - emb, - encoder_hidden_states=encoder_hidden_states, - attention_mask=attention_mask, - cross_attention_kwargs=cross_attention_kwargs, - encoder_attention_mask=encoder_attention_mask, - ) - # To support T2I-Adapter-XL - if ( - is_adapter - and len(down_block_additional_residuals) > 0 - and sample.shape == down_block_additional_residuals[0].shape - ): - sample += down_block_additional_residuals.pop(0) - - if is_controlnet: - sample = sample + mid_block_additional_residual - - # 5. up - for i, upsample_block in enumerate(self.up_blocks): - is_final_block = i == len(self.up_blocks) - 1 - - res_samples = down_block_res_samples[-len(upsample_block.resnets) :] - down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] - - # if we have not reached the final block and need to forward the - # upsample size, we do it here - if not is_final_block and forward_upsample_size: - upsample_size = down_block_res_samples[-1].shape[2:] - - if hasattr(upsample_block, "has_cross_attention") and upsample_block.has_cross_attention: - sample = upsample_block( - hidden_states=sample, - temb=emb, - res_hidden_states_tuple=res_samples, - encoder_hidden_states=encoder_hidden_states, - cross_attention_kwargs=cross_attention_kwargs, - upsample_size=upsample_size, - attention_mask=attention_mask, - encoder_attention_mask=encoder_attention_mask, - ) - else: - sample = upsample_block( - hidden_states=sample, - temb=emb, - res_hidden_states_tuple=res_samples, - upsample_size=upsample_size, - scale=lora_scale, - ) - - # 6. post-process - if self.conv_norm_out: - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - if not return_dict: - return (sample,) - - return UNet2DConditionOutput(sample=sample) diff --git a/spaces/parkyzh/bingo/src/lib/storage.ts b/spaces/parkyzh/bingo/src/lib/storage.ts deleted file mode 100644 index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000 --- a/spaces/parkyzh/bingo/src/lib/storage.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { getMany, set, del, clear } from 'idb-keyval'; - -export const Storage = { - async get(key: string | string[] | null): Promise { - if (key === null) return null; - if (typeof key === 'string') { - key = [key] - } - const returnData: Record = {} - const values = await getMany(key) - key.forEach((k, idx)=> { - returnData[k] = values[idx] - }) - return returnData; - }, - async set(object: any) { - for (let key of Object.keys(object)) { - await set(key, object[key]) - } - }, - async remove(key: string) { - return del(key); - }, - async clear() { - return clear(); - } -} diff --git a/spaces/peter2489/translator/README.md b/spaces/peter2489/translator/README.md deleted file mode 100644 index 4e957dd7d8f157761f56fbecc426e5a9f4f8a43e..0000000000000000000000000000000000000000 --- a/spaces/peter2489/translator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Translator -emoji: 📚 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pikto/prodia/cutter.py b/spaces/pikto/prodia/cutter.py deleted file mode 100644 index 01f42326f3738e6799b9ae0cfb1e496984d25da7..0000000000000000000000000000000000000000 --- a/spaces/pikto/prodia/cutter.py +++ /dev/null @@ -1,98 +0,0 @@ -import PIL -import numpy as np -from PIL import Image, 
ImageColor, ImageDraw -from PIL.Image import Image as PILImage -from pymatting.alpha.estimate_alpha_cf import estimate_alpha_cf -from pymatting.foreground.estimate_foreground_ml import estimate_foreground_ml -from pymatting.util.util import stack_images -from rembg.bg import post_process, naive_cutout, apply_background_color -from scipy.ndimage import binary_erosion - - -def alpha_matting_cutout(img: PILImage, trimap: np.ndarray) -> PILImage: - if img.mode == "RGBA" or img.mode == "CMYK": - img = img.convert("RGB") - - img = np.asarray(img) - - img_normalized = img / 255.0 - trimap_normalized = trimap / 255.0 - - alpha = estimate_alpha_cf(img_normalized, trimap_normalized) - foreground = estimate_foreground_ml(img_normalized, alpha) - cutout = stack_images(foreground, alpha) - - cutout = np.clip(cutout * 255, 0, 255).astype(np.uint8) - return Image.fromarray(cutout) - - -def generate_trimap( - mask: PILImage, - foreground_threshold: int, - background_threshold: int, - erode_structure_size: int, -) -> np.ndarray: - mask = np.asarray(mask) - - is_foreground = mask > foreground_threshold - is_background = mask < background_threshold - - structure = None - if erode_structure_size > 0: - structure = np.ones( - (erode_structure_size, erode_structure_size), dtype=np.uint8 - ) - - is_foreground = binary_erosion(is_foreground, structure=structure) - is_background = binary_erosion(is_background, structure=structure, border_value=1) - - trimap = np.full(mask.shape, dtype=np.uint8, fill_value=128) - trimap[is_foreground] = 255 - trimap[is_background] = 0 - - return trimap - - -def get_background_dominant_color(img: PILImage, mask: PILImage) -> tuple: - negative_img = img.copy() - negative_mask = PIL.ImageOps.invert(mask) - negative_img.putalpha(negative_mask) - negative_img = negative_img.resize((1, 1)) - r, g, b, a = negative_img.getpixel((0, 0)) - return r, g, b, 255 - - -def remove(session, img: PILImage, smoot: bool, matting: tuple, color) -> (PILImage, PILImage): - mask = session.predict(img)[0] - - if smoot: - mask = PIL.Image.fromarray(post_process(np.array(mask))) - - fg_t, bg_t, erode = matting - - if fg_t > 0 or bg_t > 0 or erode > 0: - mask = generate_trimap(mask, *matting) - try: - cutout = alpha_matting_cutout(img, mask) - mask = PIL.Image.fromarray(mask) - except ValueError as err: - raise err - else: - cutout = naive_cutout(img, mask) - - if color is True: - color = get_background_dominant_color(img, mask) - cutout = apply_background_color(cutout, color) - elif isinstance(color, str): - r, g, b = ImageColor.getcolor(color, "RGB") - cutout = apply_background_color(cutout, (r, g, b, 255)) - - return cutout, mask - - -def make_label(text, width=600, height=200, color="black") -> PILImage: - image = Image.new("RGB", (width, height), color) - draw = ImageDraw.Draw(image) - text_width, text_height = draw.textsize(text) - draw.text(((width-text_width)/2, height/2), text) - return image diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/hash.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/hash.py deleted file mode 100644 index 042dac813e74b8187c3754cb9a937c7f7183e331..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/hash.py +++ /dev/null @@ -1,59 +0,0 @@ -import hashlib -import logging -import sys -from optparse import Values -from typing import List - -from pip._internal.cli.base_command import Command -from 
pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.utils.hashes import FAVORITE_HASH, STRONG_HASHES -from pip._internal.utils.misc import read_chunks, write_output - -logger = logging.getLogger(__name__) - - -class HashCommand(Command): - """ - Compute a hash of a local package archive. - - These can be used with --hash in a requirements file to do repeatable - installs. - """ - - usage = "%prog [options] ..." - ignore_require_venv = True - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-a", - "--algorithm", - dest="algorithm", - choices=STRONG_HASHES, - action="store", - default=FAVORITE_HASH, - help="The hash algorithm to use: one of {}".format( - ", ".join(STRONG_HASHES) - ), - ) - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - if not args: - self.parser.print_usage(sys.stderr) - return ERROR - - algorithm = options.algorithm - for path in args: - write_output( - "%s:\n--hash=%s:%s", path, algorithm, _hash_of_file(path, algorithm) - ) - return SUCCESS - - -def _hash_of_file(path: str, algorithm: str) -> str: - """Return the hash digest of a file.""" - with open(path, "rb") as archive: - hash = hashlib.new(algorithm) - for chunk in read_chunks(archive): - hash.update(chunk) - return hash.hexdigest() diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/packages.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/packages.py deleted file mode 100644 index 9582fa730f121634348a79c1a8b0cc2df99c616f..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/packages.py +++ /dev/null @@ -1,16 +0,0 @@ -import sys - -# This code exists for backwards compatibility reasons. -# I don't like it either. Just look the other way. :) - -for package in ('urllib3', 'idna', 'chardet'): - vendored_package = "pip._vendor." + package - locals()[package] = __import__(vendored_package) - # This traversal is apparently necessary such that the identities are - # preserved (requests.packages.urllib3.* is urllib3.*) - for mod in list(sys.modules): - if mod == vendored_package or mod.startswith(vendored_package + '.'): - unprefixed_mod = mod[len("pip._vendor."):] - sys.modules['pip._vendor.requests.packages.' + unprefixed_mod] = sys.modules[mod] - -# Kinda cool, though, right? diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/_framework_compat.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/_framework_compat.py deleted file mode 100644 index cffa27cb08285d1535e9812858dbad1551fc972f..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/_framework_compat.py +++ /dev/null @@ -1,55 +0,0 @@ -""" -Backward compatibility for homebrew builds on macOS. -""" - - -import sys -import os -import functools -import subprocess -import sysconfig - - -@functools.lru_cache() -def enabled(): - """ - Only enabled for Python 3.9 framework homebrew builds - except ensurepip and venv. 
- """ - PY39 = (3, 9) < sys.version_info < (3, 10) - framework = sys.platform == 'darwin' and sys._framework - homebrew = "Cellar" in sysconfig.get_config_var('projectbase') - venv = sys.prefix != sys.base_prefix - ensurepip = os.environ.get("ENSUREPIP_OPTIONS") - return PY39 and framework and homebrew and not venv and not ensurepip - - -schemes = dict( - osx_framework_library=dict( - stdlib='{installed_base}/{platlibdir}/python{py_version_short}', - platstdlib='{platbase}/{platlibdir}/python{py_version_short}', - purelib='{homebrew_prefix}/lib/python{py_version_short}/site-packages', - platlib='{homebrew_prefix}/{platlibdir}/python{py_version_short}/site-packages', - include='{installed_base}/include/python{py_version_short}{abiflags}', - platinclude='{installed_platbase}/include/python{py_version_short}{abiflags}', - scripts='{homebrew_prefix}/bin', - data='{homebrew_prefix}', - ) -) - - -@functools.lru_cache() -def vars(): - if not enabled(): - return {} - homebrew_prefix = subprocess.check_output(['brew', '--prefix'], text=True).strip() - return locals() - - -def scheme(name): - """ - Override the selected scheme for posix_prefix. - """ - if not enabled() or not name.endswith('_prefix'): - return name - return 'osx_framework_library' diff --git a/spaces/polymath707/bigscience-bloomz-7b1/README.md b/spaces/polymath707/bigscience-bloomz-7b1/README.md deleted file mode 100644 index 185af2ae3dc0b7b89c6b007c8d6a8b45302b2803..0000000000000000000000000000000000000000 --- a/spaces/polymath707/bigscience-bloomz-7b1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bigscience Bloomz 7b1 -emoji: 💻 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pompuritz/keroppurin/README.md b/spaces/pompuritz/keroppurin/README.md deleted file mode 100644 index 9559152e86a04564df6a5af9146d041175485e23..0000000000000000000000000000000000000000 --- a/spaces/pompuritz/keroppurin/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Keroppurin -emoji: 📉 -colorFrom: yellow -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/power2/JoJoGan-powerhow2/e4e/README.md b/spaces/power2/JoJoGan-powerhow2/e4e/README.md deleted file mode 100644 index 14b6bc701b2bad3c2fc7b1d9b36f1892681ded5f..0000000000000000000000000000000000000000 --- a/spaces/power2/JoJoGan-powerhow2/e4e/README.md +++ /dev/null @@ -1,142 +0,0 @@ -# Designing an Encoder for StyleGAN Image Manipulation - - - [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](http://colab.research.google.com/github/omertov/encoder4editing/blob/main/notebooks/inference_playground.ipynb) - -> Recently, there has been a surge of diverse methods for performing image editing by employing pre-trained unconditional generators. Applying these methods on real images, however, remains a challenge, as it necessarily requires the inversion of the images into their latent space. To successfully invert a real image, one needs to find a latent code that reconstructs the input image accurately, and more importantly, allows for its meaningful manipulation. In this paper, we carefully study the latent space of StyleGAN, the state-of-the-art unconditional generator. 
We identify and analyze the existence of a distortion-editability tradeoff and a distortion-perception tradeoff within the StyleGAN latent space. We then suggest two principles for designing encoders in a manner that allows one to control the proximity of the inversions to regions that StyleGAN was originally trained on. We present an encoder based on our two principles that is specifically designed for facilitating editing on real images by balancing these tradeoffs. By evaluating its performance qualitatively and quantitatively on numerous challenging domains, including cars and horses, we show that our inversion method, followed by common editing techniques, achieves superior real-image editing quality, with only a small reconstruction accuracy drop. - -

            - -

            - -## Description -Official Implementation of "Designing an Encoder for StyleGAN Image Manipulation" paper for both training and evaluation. -The e4e encoder is specifically designed to complement existing image manipulation techniques performed over StyleGAN's latent space. - -## Recent Updates -`2021.03.25`: Add pose editing direction. - -## Getting Started -### Prerequisites -- Linux or macOS -- NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported) -- Python 3 - -### Installation -- Clone the repository: -``` -git clone https://github.com/omertov/encoder4editing.git -cd encoder4editing -``` -- Dependencies: -We recommend running this repository using [Anaconda](https://docs.anaconda.com/anaconda/install/). -All dependencies for defining the environment are provided in `environment/e4e_env.yaml`. - -### Inference Notebook -We provide a Jupyter notebook found in `notebooks/inference_playground.ipynb` that allows one to encode and perform several editings on real images using StyleGAN. - -### Pretrained Models -Please download the pre-trained models from the following links. Each e4e model contains the entire pSp framework architecture, including the encoder and decoder weights. -| Path | Description -| :--- | :---------- -|[FFHQ Inversion](https://drive.google.com/file/d/1cUv_reLE6k3604or78EranS7XzuVMWeO/view?usp=sharing) | FFHQ e4e encoder. -|[Cars Inversion](https://drive.google.com/file/d/17faPqBce2m1AQeLCLHUVXaDfxMRU2QcV/view?usp=sharing) | Cars e4e encoder. -|[Horse Inversion](https://drive.google.com/file/d/1TkLLnuX86B_BMo2ocYD0kX9kWh53rUVX/view?usp=sharing) | Horse e4e encoder. -|[Church Inversion](https://drive.google.com/file/d/1-L0ZdnQLwtdy6-A_Ccgq5uNJGTqE7qBa/view?usp=sharing) | Church e4e encoder. - -If you wish to use one of the pretrained models for training or inference, you may do so using the flag `--checkpoint_path`. - -In addition, we provide various auxiliary models needed for training your own e4e model from scratch. -| Path | Description -| :--- | :---------- -|[FFHQ StyleGAN](https://drive.google.com/file/d/1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT/view?usp=sharing) | StyleGAN model pretrained on FFHQ taken from [rosinality](https://github.com/rosinality/stylegan2-pytorch) with 1024x1024 output resolution. -|[IR-SE50 Model](https://drive.google.com/file/d/1KW7bjndL3QG3sxBbZxreGHigcCCpsDgn/view?usp=sharing) | Pretrained IR-SE50 model taken from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) for use in our ID loss during training. -|[MOCOv2 Model](https://drive.google.com/file/d/18rLcNGdteX5LwT7sv_F7HWr12HpVEzVe/view?usp=sharing) | Pretrained ResNet-50 model trained using MOCOv2 for use in our simmilarity loss for domains other then human faces during training. - -By default, we assume that all auxiliary models are downloaded and saved to the directory `pretrained_models`. However, you may use your own paths by changing the necessary values in `configs/path_configs.py`. - -## Training -To train the e4e encoder, make sure the paths to the required models, as well as training and testing data is configured in `configs/path_configs.py` and `configs/data_configs.py`. 
-#### **Training the e4e Encoder** -``` -python scripts/train.py \ ---dataset_type cars_encode \ ---exp_dir new/experiment/directory \ ---start_from_latent_avg \ ---use_w_pool \ ---w_discriminator_lambda 0.1 \ ---progressive_start 20000 \ ---id_lambda 0.5 \ ---val_interval 10000 \ ---max_steps 200000 \ ---stylegan_size 512 \ ---stylegan_weights path/to/pretrained/stylegan.pt \ ---workers 8 \ ---batch_size 8 \ ---test_batch_size 4 \ ---test_workers 4 -``` - -#### Training on your own dataset -In order to train the e4e encoder on a custom dataset, perform the following adjustments: -1. Insert the paths to your train and test data into the `dataset_paths` variable defined in `configs/paths_config.py`: -``` -dataset_paths = { - 'my_train_data': '/path/to/train/images/directory', - 'my_test_data': '/path/to/test/images/directory' -} -``` -2. Configure a new dataset under the DATASETS variable defined in `configs/data_configs.py`: -``` -DATASETS = { - 'my_data_encode': { - 'transforms': transforms_config.EncodeTransforms, - 'train_source_root': dataset_paths['my_train_data'], - 'train_target_root': dataset_paths['my_train_data'], - 'test_source_root': dataset_paths['my_test_data'], - 'test_target_root': dataset_paths['my_test_data'] - } -} -``` -Refer to `configs/transforms_config.py` for the transformations applied to the train and test images during training. - -3. Finally, run a training session with `--dataset_type my_data_encode`. - -## Inference -Having trained your model, you can use `scripts/inference.py` to apply the model on a set of images. -For example, -``` -python scripts/inference.py \ ---images_dir=/path/to/images/directory \ ---save_dir=/path/to/saving/directory \ -path/to/checkpoint.pt -``` - -## Latent Editing Consistency (LEC) -As described in the paper, we suggest a new metric, Latent Editing Consistency (LEC), for evaluating the encoder's -performance. -We provide an example for calculating the metric over the FFHQ StyleGAN using the aging editing direction in -`metrics/LEC.py`. - -To run the example: -``` -cd metrics -python LEC.py \ ---images_dir=/path/to/images/directory \ -path/to/checkpoint.pt -``` - -## Acknowledgments -This code borrows heavily from [pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel) - -## Citation -If you use this code for your research, please cite our paper Designing an Encoder for StyleGAN Image Manipulation: - -``` -@article{tov2021designing, - title={Designing an Encoder for StyleGAN Image Manipulation}, - author={Tov, Omer and Alaluf, Yuval and Nitzan, Yotam and Patashnik, Or and Cohen-Or, Daniel}, - journal={arXiv preprint arXiv:2102.02766}, - year={2021} -} -``` diff --git a/spaces/prerna9811/Chord/portaudio/src/common/pa_trace.c b/spaces/prerna9811/Chord/portaudio/src/common/pa_trace.c deleted file mode 100644 index 6763dfacff401e5b4543725728cb30eab41d03b9..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/common/pa_trace.c +++ /dev/null @@ -1,238 +0,0 @@ -/* - * $Id$ - * Portable Audio I/O Library Trace Facility - * Store trace information in real-time for later printing. 
- * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2000 Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup common_src - - @brief Real-time safe event trace logging facility for debugging. -*/ - - -#include -#include -#include -#include -#include -#include "pa_trace.h" -#include "pa_util.h" -#include "pa_debugprint.h" - -#if PA_TRACE_REALTIME_EVENTS - -static char const *traceTextArray[PA_MAX_TRACE_RECORDS]; -static int traceIntArray[PA_MAX_TRACE_RECORDS]; -static int traceIndex = 0; -static int traceBlock = 0; - -/*********************************************************************/ -void PaUtil_ResetTraceMessages() -{ - traceIndex = 0; -} - -/*********************************************************************/ -void PaUtil_DumpTraceMessages() -{ - int i; - int messageCount = (traceIndex < PA_MAX_TRACE_RECORDS) ? traceIndex : PA_MAX_TRACE_RECORDS; - - printf("DumpTraceMessages: traceIndex = %d\n", traceIndex ); - for( i=0; idata = (char*)PaUtil_AllocateMemory(maxSizeInBytes); - if (pLog->data == 0) - { - PaUtil_FreeMemory(pLog); - return paInsufficientMemory; - } - pLog->magik = kMagik; - pLog->size = maxSizeInBytes; - pLog->refTime = PaUtil_GetTime(); - return paNoError; -} - -void PaUtil_ResetHighSpeedLogTimeRef( LogHandle hLog ) -{ - PaHighPerformanceLog* pLog = (PaHighPerformanceLog*)hLog; - assert(pLog->magik == kMagik); - pLog->refTime = PaUtil_GetTime(); -} - -typedef struct __PaLogEntryHeader -{ - int size; - double timeStamp; -} PaLogEntryHeader; - -#ifdef __APPLE__ -#define _vsnprintf vsnprintf -#define min(a,b) ((a)<(b)?(a):(b)) -#endif - - -int PaUtil_AddHighSpeedLogMessage( LogHandle hLog, const char* fmt, ... 
) -{ - va_list l; - int n = 0; - PaHighPerformanceLog* pLog = (PaHighPerformanceLog*)hLog; - if (pLog != 0) - { - PaLogEntryHeader* pHeader; - char* p; - int maxN; - assert(pLog->magik == kMagik); - pHeader = (PaLogEntryHeader*)( pLog->data + pLog->writePtr ); - p = (char*)( pHeader + 1 ); - maxN = pLog->size - pLog->writePtr - 2 * sizeof(PaLogEntryHeader); - - pHeader->timeStamp = PaUtil_GetTime() - pLog->refTime; - if (maxN > 0) - { - if (maxN > 32) - { - va_start(l, fmt); - n = _vsnprintf(p, min(1024, maxN), fmt, l); - va_end(l); - } - else { - n = sprintf(p, "End of log..."); - } - n = ((n + sizeof(unsigned)) & ~(sizeof(unsigned)-1)) + sizeof(PaLogEntryHeader); - pHeader->size = n; -#if 0 - PaUtil_DebugPrint("%05u.%03u: %s\n", pHeader->timeStamp/1000, pHeader->timeStamp%1000, p); -#endif - pLog->writePtr += n; - } - } - return n; -} - -void PaUtil_DumpHighSpeedLog( LogHandle hLog, const char* fileName ) -{ - FILE* f = (fileName != NULL) ? fopen(fileName, "w") : stdout; - unsigned localWritePtr; - PaHighPerformanceLog* pLog = (PaHighPerformanceLog*)hLog; - assert(pLog->magik == kMagik); - localWritePtr = pLog->writePtr; - while (pLog->readPtr != localWritePtr) - { - const PaLogEntryHeader* pHeader = (const PaLogEntryHeader*)( pLog->data + pLog->readPtr ); - const char* p = (const char*)( pHeader + 1 ); - const PaUint64 ts = (const PaUint64)( pHeader->timeStamp * USEC_PER_SEC ); - assert(pHeader->size < (1024+sizeof(unsigned)+sizeof(PaLogEntryHeader))); - fprintf(f, "%05u.%03u: %s\n", (unsigned)(ts/1000), (unsigned)(ts%1000), p); - pLog->readPtr += pHeader->size; - } - if (f != stdout) - { - fclose(f); - } -} - -void PaUtil_DiscardHighSpeedLog( LogHandle hLog ) -{ - PaHighPerformanceLog* pLog = (PaHighPerformanceLog*)hLog; - assert(pLog->magik == kMagik); - PaUtil_FreeMemory(pLog->data); - PaUtil_FreeMemory(pLog); -} - -#else -/* This stub was added so that this file will generate a symbol. - * Otherwise linker/archiver programs will complain. 
- */ -int PaUtil_TraceStubToSatisfyLinker(void) -{ - return 0; -} -#endif /* TRACE_REALTIME_EVENTS */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/index.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/index.ts deleted file mode 100644 index 201102d49e08cced6f4644eca3b5c0b6666650ff..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/wasm/src/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { WorkerProxy, type WorkerProxyOptions } from "./worker-proxy"; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-a2897682.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-a2897682.js deleted file mode 100644 index 01c410f4544fc6ae1778879e3cdd7c7d2b6442a2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-a2897682.js +++ /dev/null @@ -1,2 +0,0 @@ -import{B as Me}from"./Button-89057c03.js";import{S as Ne}from"./Index-37584f50.js";import{B as Oe}from"./BlockLabel-e3b0d1c3.js";import{I as Ue}from"./IconButton-16e5dbea.js";import{E as Fe}from"./Empty-937365d8.js";import{S as Pe}from"./ShareButton-d3fa81fa.js";import{D as Xe}from"./Download-696bd40c.js";import{I as je}from"./Image-eaba773f.js";import{n as We}from"./index-0526d562.js";/* empty css */import{M as Je}from"./ModifyUpload-87a26b2d.js";import{u as Ke}from"./utils-c3e3db58.js";import"./svelte/svelte.js";import"./Clear-2c7bae91.js";var se=Object.prototype.hasOwnProperty;function ue(e,l,t){for(t of e.keys())if(X(t,l))return t}function X(e,l){var t,n,i;if(e===l)return!0;if(e&&l&&(t=e.constructor)===l.constructor){if(t===Date)return e.getTime()===l.getTime();if(t===RegExp)return e.toString()===l.toString();if(t===Array){if((n=e.length)===l.length)for(;n--&&X(e[n],l[n]););return n===-1}if(t===Set){if(e.size!==l.size)return!1;for(n of e)if(i=n,i&&typeof i=="object"&&(i=ue(l,i),!i)||!l.has(i))return!1;return!0}if(t===Map){if(e.size!==l.size)return!1;for(n of e)if(i=n[0],i&&typeof i=="object"&&(i=ue(l,i),!i)||!X(n[1],l.get(i)))return!1;return!0}if(t===ArrayBuffer)e=new Uint8Array(e),l=new Uint8Array(l);else if(t===DataView){if((n=e.byteLength)===l.byteLength)for(;n--&&e.getInt8(n)===l.getInt8(n););return n===-1}if(ArrayBuffer.isView(e)){if((n=e.byteLength)===l.byteLength)for(;n--&&e[n]===l[n];);return n===-1}if(!t||typeof e=="object"){n=0;for(t in e)if(se.call(e,t)&&++n&&!se.call(l,t)||!(t in l)||!X(e[t],l[t]))return!1;return Object.keys(l).length===n}}return e!==e&&l!==l}async function Qe(e){return e?`
            ${(await Promise.all(e.map(async([t,n])=>t===null||!t.url?"":await Ke(t.url,"url")))).map(t=>``).join("")}
            `:""}const{SvelteComponent:Ye,add_iframe_resize_listener:Ze,add_render_callback:ze,append:S,attr:d,binding_callbacks:_e,bubble:ce,check_outros:W,create_component:O,destroy_component:U,destroy_each:Be,detach:T,element:D,empty:xe,ensure_array_like:Q,globals:$e,group_outros:J,init:el,insert:L,listen:K,mount_component:F,run_all:ll,safe_not_equal:tl,set_data:Se,set_style:q,space:C,src_url_equal:N,text:Ae,toggle_class:E,transition_in:B,transition_out:A}=window.__gradio__svelte__internal,{window:De}=$e,{createEventDispatcher:nl}=window.__gradio__svelte__internal,{tick:il}=window.__gradio__svelte__internal;function he(e,l,t){const n=e.slice();return n[39]=l[t],n[41]=t,n}function me(e,l,t){const n=e.slice();return n[42]=l[t],n[43]=l,n[41]=t,n}function ge(e){let l,t;return l=new Oe({props:{show_label:e[1],Icon:je,label:e[2]||"Gallery"}}),{c(){O(l.$$.fragment)},m(n,i){F(l,n,i),t=!0},p(n,i){const r={};i[0]&2&&(r.show_label=n[1]),i[0]&4&&(r.label=n[2]||"Gallery"),l.$set(r)},i(n){t||(B(l.$$.fragment,n),t=!0)},o(n){A(l.$$.fragment,n),t=!1},d(n){U(l,n)}}}function ol(e){let l,t,n,i,r,m,u=e[0]!==null&&e[7]&&de(e),_=e[9]&&ve(e),w=Q(e[12]),o=[];for(let a=0;ae[34].call(t)),E(t,"fixed-height",!e[6]||e[6]=="auto")},m(a,c){u&&u.m(a,c),L(a,l,c),L(a,t,c),S(t,n),_&&_.m(n,null),S(n,i);for(let f=0;f{u=null}),W()),a[9]?_?(_.p(a,c),c[0]&512&&B(_,1)):(_=ve(a),_.c(),B(_,1),_.m(n,i)):_&&(J(),A(_,1,1,()=>{_=null}),W()),c[0]&4097){w=Q(a[12]);let f;for(f=0;f{v=null}),W());const I={};if(y[0]&2048&&(I.i18n=g[11]),i.$set(I),(!k||y[0]&4097&&!N(u.src,_=g[12][g[0]].image.path))&&d(u,"src",_),(!k||y[0]&4097&&w!==(w=g[12][g[0]].caption||""))&&d(u,"alt",w),(!k||y[0]&4097&&o!==(o=g[12][g[0]].caption||null))&&d(u,"title",o),(!k||y[0]&4097)&&E(u,"with-caption",!!g[12][g[0]].caption),(!k||y[0]&4097)&&q(m,"height","calc(100% - "+(g[12][g[0]].caption?"80px":"60px")+")"),g[12][g[0]]?.caption?z?z.p(g,y):(z=be(g),z.c(),z.m(l,c)):z&&(z.d(1),z=null),y[0]&12289){b=Q(g[12]);let j;for(j=0;je[28](l,u),a=()=>e[28](null,u);function c(){return e[29](e[41])}return{c(){l=D("button"),t=D("img"),r=C(),N(t.src,n=e[42].image.path)||d(t,"src",n),d(t,"title",i=e[42].caption||null),d(t,"alt",""),d(t,"loading","lazy"),d(t,"class","svelte-fiatpe"),d(l,"class","thumbnail-item thumbnail-small svelte-fiatpe"),d(l,"aria-label",m="Thumbnail "+(e[41]+1)+" of "+e[12].length),E(l,"selected",e[0]===e[41])},m(f,k){L(f,l,k),S(l,t),S(l,r),o(),_||(w=K(l,"click",c),_=!0)},p(f,k){e=f,k[0]&4096&&!N(t.src,n=e[42].image.path)&&d(t,"src",n),k[0]&4096&&i!==(i=e[42].caption||null)&&d(t,"title",i),k[0]&4096&&m!==(m="Thumbnail "+(e[41]+1)+" of "+e[12].length)&&d(l,"aria-label",m),u!==e[41]&&(a(),u=e[41],o()),k[0]&1&&E(l,"selected",e[0]===e[41])},d(f){f&&T(l),a(),_=!1,w()}}}function ve(e){let l,t,n;return t=new Pe({props:{i18n:e[11],value:e[12],formatter:Qe}}),t.$on("share",e[31]),t.$on("error",e[32]),{c(){l=D("div"),O(t.$$.fragment),d(l,"class","icon-button svelte-fiatpe")},m(i,r){L(i,l,r),F(t,l,null),n=!0},p(i,r){const m={};r[0]&2048&&(m.i18n=i[11]),r[0]&4096&&(m.value=i[12]),t.$set(m)},i(i){n||(B(t.$$.fragment,i),n=!0)},o(i){A(t.$$.fragment,i),n=!1},d(i){i&&T(l),U(t)}}}function pe(e){let l,t=e[39].caption+"",n;return{c(){l=D("div"),n=Ae(t),d(l,"class","caption-label svelte-fiatpe")},m(i,r){L(i,l,r),S(l,n)},p(i,r){r[0]&4096&&t!==(t=i[39].caption+"")&&Se(n,t)},d(i){i&&T(l)}}}function ye(e){let l,t,n,i,r,m,u,_,w,o=e[39].caption&&pe(e);function a(){return e[33](e[41])}return{c(){l=D("button"),t=D("img"),r=C(),o&&o.c(),m=C(),d(t,"alt",n=e[39].caption||""),N(t.src,i=typeof 
e[39].image=="string"?e[39].image:e[39].image.url)||d(t,"src",i),d(t,"loading","lazy"),d(t,"class","svelte-fiatpe"),d(l,"class","thumbnail-item thumbnail-lg svelte-fiatpe"),d(l,"aria-label",u="Thumbnail "+(e[41]+1)+" of "+e[12].length),E(l,"selected",e[0]===e[41])},m(c,f){L(c,l,f),S(l,t),S(l,r),o&&o.m(l,null),S(l,m),_||(w=K(l,"click",a),_=!0)},p(c,f){e=c,f[0]&4096&&n!==(n=e[39].caption||"")&&d(t,"alt",n),f[0]&4096&&!N(t.src,i=typeof e[39].image=="string"?e[39].image:e[39].image.url)&&d(t,"src",i),e[39].caption?o?o.p(e,f):(o=pe(e),o.c(),o.m(l,m)):o&&(o.d(1),o=null),f[0]&4096&&u!==(u="Thumbnail "+(e[41]+1)+" of "+e[12].length)&&d(l,"aria-label",u),f[0]&1&&E(l,"selected",e[0]===e[41])},d(c){c&&T(l),o&&o.d(),_=!1,w()}}}function fl(e){let l,t;return l=new je({}),{c(){O(l.$$.fragment)},m(n,i){F(l,n,i),t=!0},i(n){t||(B(l.$$.fragment,n),t=!0)},o(n){A(l.$$.fragment,n),t=!1},d(n){U(l,n)}}}function rl(e){let l,t,n,i,r,m,u;ze(e[25]);let _=e[1]&&ge(e);const w=[al,ol],o=[];function a(c,f){return c[3]===null||c[12]===null||c[12].length===0?0:1}return t=a(e),n=o[t]=w[t](e),{c(){_&&_.c(),l=C(),n.c(),i=xe()},m(c,f){_&&_.m(c,f),L(c,l,f),o[t].m(c,f),L(c,i,f),r=!0,m||(u=K(De,"resize",e[25]),m=!0)},p(c,f){c[1]?_?(_.p(c,f),f[0]&2&&B(_,1)):(_=ge(c),_.c(),B(_,1),_.m(l.parentNode,l)):_&&(J(),A(_,1,1,()=>{_=null}),W());let k=t;t=a(c),t===k?o[t].p(c,f):(J(),A(o[k],1,1,()=>{o[k]=null}),W(),n=o[t],n?n.p(c,f):(n=o[t]=w[t](c),n.c()),B(n,1),n.m(i.parentNode,i))},i(c){r||(B(_),B(n),r=!0)},o(c){A(_),A(n),r=!1},d(c){c&&(T(l),T(i)),_&&_.d(c),o[t].d(c),m=!1,u()}}}function sl(e){return typeof e=="object"&&e!==null&&"data"in e}function ee(e){return sl(e)?e.path:typeof e=="string"?e:Array.isArray(e)?ee(e[0]):""}function ul(e,l,t){let n,i,{show_label:r=!0}=l,{label:m}=l,{root:u=""}=l,{proxy_url:_=null}=l,{value:w=null}=l,{columns:o=[2]}=l,{rows:a=void 0}=l,{height:c="auto"}=l,{preview:f}=l,{allow_preview:k=!0}=l,{object_fit:H="cover"}=l,{show_share_button:R=!1}=l,{show_download_button:v=!1}=l,{i18n:z}=l,{selected_index:b=null}=l;const p=nl();let g=!0,y=null,I=w;b===null&&f&&w?.length&&(b=0);let j=b;function M(s){const P=s.target,x=s.clientX,$=P.offsetWidth/2;x<$?t(0,b=n):t(0,b=i)}function Y(s){switch(s.code){case"Escape":s.preventDefault(),t(0,b=null);break;case"ArrowLeft":s.preventDefault(),t(0,b=n);break;case"ArrowRight":s.preventDefault(),t(0,b=i);break}}let V=[],G;async function Z(s){if(typeof s!="number"||(await il(),V[s]===void 0))return;V[s]?.focus();const{left:P,width:x}=G.getBoundingClientRect(),{left:fe,width:$}=V[s].getBoundingClientRect(),re=fe-P+$/2-x/2+G.scrollLeft;G&&typeof G.scrollTo=="function"&&G.scrollTo({left:re<0?0:re,behavior:"smooth"})}let h=0,ae=0;function Ie(){t(16,ae=De.innerHeight)}const Te=()=>t(0,b=null),Le=s=>M(s);function Ge(s,P){_e[s?"unshift":"push"](()=>{V[P]=s,t(13,V)})}const qe=s=>t(0,b=s);function Ce(s){_e[s?"unshift":"push"](()=>{G=s,t(14,G)})}function Ee(s){ce.call(this,e,s)}function He(s){ce.call(this,e,s)}const Re=s=>t(0,b=s);function Ve(){h=this.clientHeight,t(15,h)}return e.$$set=s=>{"show_label"in s&&t(1,r=s.show_label),"label"in s&&t(2,m=s.label),"root"in s&&t(19,u=s.root),"proxy_url"in s&&t(20,_=s.proxy_url),"value"in s&&t(3,w=s.value),"columns"in s&&t(4,o=s.columns),"rows"in s&&t(5,a=s.rows),"height"in s&&t(6,c=s.height),"preview"in s&&t(21,f=s.preview),"allow_preview"in s&&t(7,k=s.allow_preview),"object_fit"in s&&t(8,H=s.object_fit),"show_share_button"in s&&t(9,R=s.show_share_button),"show_download_button"in s&&t(10,v=s.show_download_button),"i18n"in 
s&&t(11,z=s.i18n),"selected_index"in s&&t(0,b=s.selected_index)},e.$$.update=()=>{e.$$.dirty[0]&4194312&&t(22,g=w==null||w.length==0?!0:g),e.$$.dirty[0]&1572872&&t(12,y=w===null?null:w.map(s=>({image:We(s.image,u,_),caption:s.caption}))),e.$$.dirty[0]&14680073&&(X(I,w)||(g?(t(0,b=f&&w?.length?0:null),t(22,g=!1)):t(0,b=b!==null&&w!==null&&bdl(n,"selected_index",_)),n.$on("change",e[23]),n.$on("select",e[24]),n.$on("share",e[25]),n.$on("error",e[26]),{c(){le(l.$$.fragment),t=zl(),le(n.$$.fragment)},m(o,a){ne(l,o,a),yl(o,t,a),ne(n,o,a),r=!0},p(o,a){const c=a&2097154?vl(m,[a&2097152&&{autoscroll:o[21].autoscroll},a&2097152&&{i18n:o[21].i18n},a&2&&kl(o[1])]):{};l.$set(c);const f={};a&8&&(f.label=o[3]),a&512&&(f.value=o[9]),a&4&&(f.show_label=o[2]),a&16&&(f.root=o[4]),a&32&&(f.proxy_url=o[5]),a&8192&&(f.columns=o[13]),a&16384&&(f.rows=o[14]),a&32768&&(f.height=o[15]),a&65536&&(f.preview=o[16]),a&262144&&(f.object_fit=o[18]),a&131072&&(f.allow_preview=o[17]),a&524288&&(f.show_share_button=o[19]),a&1048576&&(f.show_download_button=o[20]),a&2097152&&(f.i18n=o[21].i18n),!i&&a&1&&(i=!0,f.selected_index=o[0],ml(()=>i=!1)),n.$set(f)},i(o){r||(ie(l.$$.fragment,o),ie(n.$$.fragment,o),r=!0)},o(o){oe(l.$$.fragment,o),oe(n.$$.fragment,o),r=!1},d(o){o&&bl(t),te(l,o),te(n,o)}}}function Sl(e){let l,t;return l=new Me({props:{visible:e[8],variant:"solid",padding:!1,elem_id:e[6],elem_classes:e[7],container:e[10],scale:e[11],min_width:e[12],allow_overflow:!1,height:typeof e[15]=="number"?e[15]:void 0,$$slots:{default:[Bl]},$$scope:{ctx:e}}}),{c(){le(l.$$.fragment)},m(n,i){ne(l,n,i),t=!0},p(n,[i]){const r={};i&256&&(r.visible=n[8]),i&64&&(r.elem_id=n[6]),i&128&&(r.elem_classes=n[7]),i&1024&&(r.container=n[10]),i&2048&&(r.scale=n[11]),i&4096&&(r.min_width=n[12]),i&32768&&(r.height=typeof n[15]=="number"?n[15]:void 0),i&138404415&&(r.$$scope={dirty:i,ctx:n}),l.$set(r)},i(n){t||(ie(l.$$.fragment,n),t=!0)},o(n){oe(l.$$.fragment,n),t=!1},d(n){te(l,n)}}}function Al(e,l,t){let{loading_status:n}=l,{show_label:i}=l,{label:r}=l,{root:m}=l,{proxy_url:u}=l,{elem_id:_=""}=l,{elem_classes:w=[]}=l,{visible:o=!0}=l,{value:a=null}=l,{container:c=!0}=l,{scale:f=null}=l,{min_width:k=void 0}=l,{columns:H=[2]}=l,{rows:R=void 0}=l,{height:v="auto"}=l,{preview:z}=l,{allow_preview:b=!0}=l,{selected_index:p=null}=l,{object_fit:g="cover"}=l,{show_share_button:y=!1}=l,{show_download_button:I=!1}=l,{gradio:j}=l;function M(h){p=h,t(0,p)}const Y=()=>j.dispatch("change",a),V=h=>j.dispatch("select",h.detail),G=h=>j.dispatch("share",h.detail),Z=h=>j.dispatch("error",h.detail);return e.$$set=h=>{"loading_status"in h&&t(1,n=h.loading_status),"show_label"in h&&t(2,i=h.show_label),"label"in h&&t(3,r=h.label),"root"in h&&t(4,m=h.root),"proxy_url"in h&&t(5,u=h.proxy_url),"elem_id"in h&&t(6,_=h.elem_id),"elem_classes"in h&&t(7,w=h.elem_classes),"visible"in h&&t(8,o=h.visible),"value"in h&&t(9,a=h.value),"container"in h&&t(10,c=h.container),"scale"in h&&t(11,f=h.scale),"min_width"in h&&t(12,k=h.min_width),"columns"in h&&t(13,H=h.columns),"rows"in h&&t(14,R=h.rows),"height"in h&&t(15,v=h.height),"preview"in h&&t(16,z=h.preview),"allow_preview"in h&&t(17,b=h.allow_preview),"selected_index"in h&&t(0,p=h.selected_index),"object_fit"in h&&t(18,g=h.object_fit),"show_share_button"in h&&t(19,y=h.show_share_button),"show_download_button"in h&&t(20,I=h.show_download_button),"gradio"in h&&t(21,j=h.gradio)},[p,n,i,r,m,u,_,w,o,a,c,f,k,H,R,v,z,b,g,y,I,j,M,Y,V,G,Z]}class Fl extends 
hl{constructor(l){super(),pl(this,l,Al,Sl,jl,{loading_status:1,show_label:2,label:3,root:4,proxy_url:5,elem_id:6,elem_classes:7,visible:8,value:9,container:10,scale:11,min_width:12,columns:13,rows:14,height:15,preview:16,allow_preview:17,selected_index:0,object_fit:18,show_share_button:19,show_download_button:20,gradio:21})}}export{cl as BaseGallery,Fl as default}; -//# sourceMappingURL=Index-a2897682.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_core/_multiarray_umath.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_core/_multiarray_umath.py deleted file mode 100644 index 7ce48fcb258d56855ffd104e0bb1cd4aafba9de2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/_core/_multiarray_umath.py +++ /dev/null @@ -1,6 +0,0 @@ -from numpy.core import _multiarray_umath - -_globals = globals() - -for item in _multiarray_umath.__dir__(): - _globals[item] = getattr(_multiarray_umath, item) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/accesstype.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/accesstype.f90 deleted file mode 100644 index e2cbd445daf57f21e2d727f42a3891ec28725175..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/accesstype.f90 +++ /dev/null @@ -1,13 +0,0 @@ -module foo - public - type, private, bind(c) :: a - integer :: i - end type a - type, bind(c) :: b_ - integer :: j - end type b_ - public :: b_ - type :: c - integer :: k - end type c -end module foo diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/test_pivot_multilevel.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/test_pivot_multilevel.py deleted file mode 100644 index 08ef29440825f006bf53eea7f21f0809bff99908..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/test_pivot_multilevel.py +++ /dev/null @@ -1,254 +0,0 @@ -import numpy as np -import pytest - -from pandas._libs import lib - -import pandas as pd -from pandas import ( - Index, - MultiIndex, -) -import pandas._testing as tm - - -@pytest.mark.parametrize( - "input_index, input_columns, input_values, " - "expected_values, expected_columns, expected_index", - [ - ( - ["lev4"], - "lev3", - "values", - [ - [0.0, np.nan], - [np.nan, 1.0], - [2.0, np.nan], - [np.nan, 3.0], - [4.0, np.nan], - [np.nan, 5.0], - [6.0, np.nan], - [np.nan, 7.0], - ], - Index([1, 2], name="lev3"), - Index([1, 2, 3, 4, 5, 6, 7, 8], name="lev4"), - ), - ( - ["lev4"], - "lev3", - lib.no_default, - [ - [1.0, np.nan, 1.0, np.nan, 0.0, np.nan], - [np.nan, 1.0, np.nan, 1.0, np.nan, 1.0], - [1.0, np.nan, 2.0, np.nan, 2.0, np.nan], - [np.nan, 1.0, np.nan, 2.0, np.nan, 3.0], - [2.0, np.nan, 1.0, np.nan, 4.0, np.nan], - [np.nan, 2.0, np.nan, 1.0, np.nan, 5.0], - [2.0, np.nan, 2.0, np.nan, 6.0, np.nan], - [np.nan, 2.0, np.nan, 2.0, np.nan, 7.0], - ], - MultiIndex.from_tuples( - [ - ("lev1", 1), - ("lev1", 2), - ("lev2", 1), - ("lev2", 2), - ("values", 1), - ("values", 2), - ], - names=[None, "lev3"], - ), - Index([1, 2, 3, 4, 5, 6, 7, 8], name="lev4"), - ), - ( - ["lev1", "lev2"], - "lev3", - "values", - [[0, 1], [2, 3], [4, 5], [6, 7]], - Index([1, 2], name="lev3"), - MultiIndex.from_tuples( - [(1, 1), 
(1, 2), (2, 1), (2, 2)], names=["lev1", "lev2"] - ), - ), - ( - ["lev1", "lev2"], - "lev3", - lib.no_default, - [[1, 2, 0, 1], [3, 4, 2, 3], [5, 6, 4, 5], [7, 8, 6, 7]], - MultiIndex.from_tuples( - [("lev4", 1), ("lev4", 2), ("values", 1), ("values", 2)], - names=[None, "lev3"], - ), - MultiIndex.from_tuples( - [(1, 1), (1, 2), (2, 1), (2, 2)], names=["lev1", "lev2"] - ), - ), - ], -) -def test_pivot_list_like_index( - input_index, - input_columns, - input_values, - expected_values, - expected_columns, - expected_index, -): - # GH 21425, test when index is given a list - df = pd.DataFrame( - { - "lev1": [1, 1, 1, 1, 2, 2, 2, 2], - "lev2": [1, 1, 2, 2, 1, 1, 2, 2], - "lev3": [1, 2, 1, 2, 1, 2, 1, 2], - "lev4": [1, 2, 3, 4, 5, 6, 7, 8], - "values": [0, 1, 2, 3, 4, 5, 6, 7], - } - ) - - result = df.pivot(index=input_index, columns=input_columns, values=input_values) - expected = pd.DataFrame( - expected_values, columns=expected_columns, index=expected_index - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "input_index, input_columns, input_values, " - "expected_values, expected_columns, expected_index", - [ - ( - "lev4", - ["lev3"], - "values", - [ - [0.0, np.nan], - [np.nan, 1.0], - [2.0, np.nan], - [np.nan, 3.0], - [4.0, np.nan], - [np.nan, 5.0], - [6.0, np.nan], - [np.nan, 7.0], - ], - Index([1, 2], name="lev3"), - Index([1, 2, 3, 4, 5, 6, 7, 8], name="lev4"), - ), - ( - ["lev1", "lev2"], - ["lev3"], - "values", - [[0, 1], [2, 3], [4, 5], [6, 7]], - Index([1, 2], name="lev3"), - MultiIndex.from_tuples( - [(1, 1), (1, 2), (2, 1), (2, 2)], names=["lev1", "lev2"] - ), - ), - ( - ["lev1"], - ["lev2", "lev3"], - "values", - [[0, 1, 2, 3], [4, 5, 6, 7]], - MultiIndex.from_tuples( - [(1, 1), (1, 2), (2, 1), (2, 2)], names=["lev2", "lev3"] - ), - Index([1, 2], name="lev1"), - ), - ( - ["lev1", "lev2"], - ["lev3", "lev4"], - "values", - [ - [0.0, 1.0, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], - [np.nan, np.nan, 2.0, 3.0, np.nan, np.nan, np.nan, np.nan], - [np.nan, np.nan, np.nan, np.nan, 4.0, 5.0, np.nan, np.nan], - [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 6.0, 7.0], - ], - MultiIndex.from_tuples( - [(1, 1), (2, 2), (1, 3), (2, 4), (1, 5), (2, 6), (1, 7), (2, 8)], - names=["lev3", "lev4"], - ), - MultiIndex.from_tuples( - [(1, 1), (1, 2), (2, 1), (2, 2)], names=["lev1", "lev2"] - ), - ), - ], -) -def test_pivot_list_like_columns( - input_index, - input_columns, - input_values, - expected_values, - expected_columns, - expected_index, -): - # GH 21425, test when columns is given a list - df = pd.DataFrame( - { - "lev1": [1, 1, 1, 1, 2, 2, 2, 2], - "lev2": [1, 1, 2, 2, 1, 1, 2, 2], - "lev3": [1, 2, 1, 2, 1, 2, 1, 2], - "lev4": [1, 2, 3, 4, 5, 6, 7, 8], - "values": [0, 1, 2, 3, 4, 5, 6, 7], - } - ) - - result = df.pivot(index=input_index, columns=input_columns, values=input_values) - expected = pd.DataFrame( - expected_values, columns=expected_columns, index=expected_index - ) - tm.assert_frame_equal(result, expected) - - -def test_pivot_multiindexed_rows_and_cols(using_array_manager): - # GH 36360 - - df = pd.DataFrame( - data=np.arange(12).reshape(4, 3), - columns=MultiIndex.from_tuples( - [(0, 0), (0, 1), (0, 2)], names=["col_L0", "col_L1"] - ), - index=MultiIndex.from_tuples( - [(0, 0, 0), (0, 0, 1), (1, 1, 1), (1, 0, 0)], - names=["idx_L0", "idx_L1", "idx_L2"], - ), - ) - - res = df.pivot_table( - index=["idx_L0"], - columns=["idx_L1"], - values=[(0, 1)], - aggfunc=lambda col: col.values.sum(), - ) - - expected = pd.DataFrame( - data=[[5, np.nan], [10, 
7.0]], - columns=MultiIndex.from_tuples( - [(0, 1, 0), (0, 1, 1)], names=["col_L0", "col_L1", "idx_L1"] - ), - index=Index([0, 1], dtype="int64", name="idx_L0"), - ) - if not using_array_manager: - # BlockManager does not preserve the dtypes - expected = expected.astype("float64") - - tm.assert_frame_equal(res, expected) - - -def test_pivot_df_multiindex_index_none(): - # GH 23955 - df = pd.DataFrame( - [ - ["A", "A1", "label1", 1], - ["A", "A2", "label2", 2], - ["B", "A1", "label1", 3], - ["B", "A2", "label2", 4], - ], - columns=["index_1", "index_2", "label", "value"], - ) - df = df.set_index(["index_1", "index_2"]) - - result = df.pivot(columns="label", values="value") - expected = pd.DataFrame( - [[1.0, np.nan], [np.nan, 2.0], [3.0, np.nan], [np.nan, 4.0]], - index=df.index, - columns=Index(["label1", "label2"], name="label"), - ) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/filesystem.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/filesystem.py deleted file mode 100644 index b7e6191abe6b4b10888071e959146e52519bf132..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/utils/filesystem.py +++ /dev/null @@ -1,182 +0,0 @@ -import fnmatch -import os -import os.path -import random -import shutil -import stat -import sys -from contextlib import contextmanager -from tempfile import NamedTemporaryFile -from typing import Any, BinaryIO, Iterator, List, Union, cast - -from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed - -from pip._internal.utils.compat import get_path_uid -from pip._internal.utils.misc import format_size - - -def check_path_owner(path: str) -> bool: - # If we don't have a way to check the effective uid of this process, then - # we'll just assume that we own the directory. - if sys.platform == "win32" or not hasattr(os, "geteuid"): - return True - - assert os.path.isabs(path) - - previous = None - while path != previous: - if os.path.lexists(path): - # Check if path is writable by current user. - if os.geteuid() == 0: - # Special handling for root user in order to handle properly - # cases where users use sudo without -H flag. - try: - path_uid = get_path_uid(path) - except OSError: - return False - return path_uid == 0 - else: - return os.access(path, os.W_OK) - else: - previous, path = path, os.path.dirname(path) - return False # assume we don't own the path - - -def copy2_fixed(src: str, dest: str) -> None: - """Wrap shutil.copy2() but map errors copying socket files to - SpecialFileError as expected. - - See also https://bugs.python.org/issue37700. - """ - try: - shutil.copy2(src, dest) - except OSError: - for f in [src, dest]: - try: - is_socket_file = is_socket(f) - except OSError: - # An error has already occurred. Another error here is not - # a problem and we can ignore it. - pass - else: - if is_socket_file: - raise shutil.SpecialFileError(f"`{f}` is a socket") - - raise - - -def is_socket(path: str) -> bool: - return stat.S_ISSOCK(os.lstat(path).st_mode) - - -@contextmanager -def adjacent_tmp_file(path: str, **kwargs: Any) -> Iterator[BinaryIO]: - """Return a file-like object pointing to a tmp file next to path. - - The file is created securely and is ensured to be written to disk - after the context reaches its end. - - kwargs will be passed to tempfile.NamedTemporaryFile to control - the way the temporary file will be opened. 
- """ - with NamedTemporaryFile( - delete=False, - dir=os.path.dirname(path), - prefix=os.path.basename(path), - suffix=".tmp", - **kwargs, - ) as f: - result = cast(BinaryIO, f) - try: - yield result - finally: - result.flush() - os.fsync(result.fileno()) - - -# Tenacity raises RetryError by default, explicitly raise the original exception -_replace_retry = retry(reraise=True, stop=stop_after_delay(1), wait=wait_fixed(0.25)) - -replace = _replace_retry(os.replace) - - -# test_writable_dir and _test_writable_dir_win are copied from Flit, -# with the author's agreement to also place them under pip's license. -def test_writable_dir(path: str) -> bool: - """Check if a directory is writable. - - Uses os.access() on POSIX, tries creating files on Windows. - """ - # If the directory doesn't exist, find the closest parent that does. - while not os.path.isdir(path): - parent = os.path.dirname(path) - if parent == path: - break # Should never get here, but infinite loops are bad - path = parent - - if os.name == "posix": - return os.access(path, os.W_OK) - - return _test_writable_dir_win(path) - - -def _test_writable_dir_win(path: str) -> bool: - # os.access doesn't work on Windows: http://bugs.python.org/issue2528 - # and we can't use tempfile: http://bugs.python.org/issue22107 - basename = "accesstest_deleteme_fishfingers_custard_" - alphabet = "abcdefghijklmnopqrstuvwxyz0123456789" - for _ in range(10): - name = basename + "".join(random.choice(alphabet) for _ in range(6)) - file = os.path.join(path, name) - try: - fd = os.open(file, os.O_RDWR | os.O_CREAT | os.O_EXCL) - except FileExistsError: - pass - except PermissionError: - # This could be because there's a directory with the same name. - # But it's highly unlikely there's a directory called that, - # so we'll assume it's because the parent dir is not writable. - # This could as well be because the parent dir is not readable, - # due to non-privileged user access. - return False - else: - os.close(fd) - os.unlink(file) - return True - - # This should never be reached - raise OSError("Unexpected condition testing for writable directory") - - -def find_files(path: str, pattern: str) -> List[str]: - """Returns a list of absolute paths of files beneath path, recursively, - with filenames which match the UNIX-style shell glob pattern.""" - result: List[str] = [] - for root, _, files in os.walk(path): - matches = fnmatch.filter(files, pattern) - result.extend(os.path.join(root, f) for f in matches) - return result - - -def file_size(path: str) -> Union[int, float]: - # If it's a symlink, return 0. - if os.path.islink(path): - return 0 - return os.path.getsize(path) - - -def format_file_size(path: str) -> str: - return format_size(file_size(path)) - - -def directory_size(path: str) -> Union[int, float]: - size = 0.0 - for root, _dirs, files in os.walk(path): - for filename in files: - file_path = os.path.join(root, filename) - size += file_size(file_path) - return size - - -def format_directory_size(path: str) -> str: - return format_size(directory_size(path)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/jupyter.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/jupyter.py deleted file mode 100644 index bedf5cb19a385c8b57c5d0e71a32da52f34a5e78..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/jupyter.py +++ /dev/null @@ -1,92 +0,0 @@ -from typing import Any, Dict, Iterable, List - -from . 
import get_console -from .segment import Segment -from .terminal_theme import DEFAULT_TERMINAL_THEME - -JUPYTER_HTML_FORMAT = """\ -
            {code}
            -""" - - -class JupyterRenderable: - """A shim to write html to Jupyter notebook.""" - - def __init__(self, html: str, text: str) -> None: - self.html = html - self.text = text - - def _repr_mimebundle_( - self, include: Iterable[str], exclude: Iterable[str], **kwargs: Any - ) -> Dict[str, str]: - data = {"text/plain": self.text, "text/html": self.html} - if include: - data = {k: v for (k, v) in data.items() if k in include} - if exclude: - data = {k: v for (k, v) in data.items() if k not in exclude} - return data - - -class JupyterMixin: - """Add to an Rich renderable to make it render in Jupyter notebook.""" - - __slots__ = () - - def _repr_mimebundle_( - self, include: Iterable[str], exclude: Iterable[str], **kwargs: Any - ) -> Dict[str, str]: - console = get_console() - segments = list(console.render(self, console.options)) # type: ignore - html = _render_segments(segments) - text = console._render_buffer(segments) - data = {"text/plain": text, "text/html": html} - if include: - data = {k: v for (k, v) in data.items() if k in include} - if exclude: - data = {k: v for (k, v) in data.items() if k not in exclude} - return data - - -def _render_segments(segments: Iterable[Segment]) -> str: - def escape(text: str) -> str: - """Escape html.""" - return text.replace("&", "&").replace("<", "<").replace(">", ">") - - fragments: List[str] = [] - append_fragment = fragments.append - theme = DEFAULT_TERMINAL_THEME - for text, style, control in Segment.simplify(segments): - if control: - continue - text = escape(text) - if style: - rule = style.get_html_style(theme) - text = f'{text}' if rule else text - if style.link: - text = f'{text}' - append_fragment(text) - - code = "".join(fragments) - html = JUPYTER_HTML_FORMAT.format(code=code) - - return html - - -def display(segments: Iterable[Segment], text: str) -> None: - """Render segments to Jupyter.""" - html = _render_segments(segments) - jupyter_renderable = JupyterRenderable(html, text) - try: - from IPython.display import display as ipython_display - - ipython_display(jupyter_renderable) - except ModuleNotFoundError: - # Handle the case where the Console has force_jupyter=True, - # but IPython is not installed. - pass - - -def print(*args: Any, **kwargs: Any) -> None: - """Proxy for Console print.""" - console = get_console() - return console.print(*args, **kwargs) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydub/silence.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydub/silence.py deleted file mode 100644 index 0ad149999d8fffc39d2bc1d0a3e2469daed0df78..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydub/silence.py +++ /dev/null @@ -1,182 +0,0 @@ -""" -Various functions for finding/manipulating silence in AudioSegments -""" -import itertools - -from .utils import db_to_float - - -def detect_silence(audio_segment, min_silence_len=1000, silence_thresh=-16, seek_step=1): - """ - Returns a list of all silent sections [start, end] in milliseconds of audio_segment. 
- Inverse of detect_nonsilent() - - audio_segment - the segment to find silence in - min_silence_len - the minimum length for any silent section - silence_thresh - the upper bound for how quiet is silent in dFBS - seek_step - step size for interating over the segment in ms - """ - seg_len = len(audio_segment) - - # you can't have a silent portion of a sound that is longer than the sound - if seg_len < min_silence_len: - return [] - - # convert silence threshold to a float value (so we can compare it to rms) - silence_thresh = db_to_float(silence_thresh) * audio_segment.max_possible_amplitude - - # find silence and add start and end indicies to the to_cut list - silence_starts = [] - - # check successive (1 sec by default) chunk of sound for silence - # try a chunk at every "seek step" (or every chunk for a seek step == 1) - last_slice_start = seg_len - min_silence_len - slice_starts = range(0, last_slice_start + 1, seek_step) - - # guarantee last_slice_start is included in the range - # to make sure the last portion of the audio is searched - if last_slice_start % seek_step: - slice_starts = itertools.chain(slice_starts, [last_slice_start]) - - for i in slice_starts: - audio_slice = audio_segment[i:i + min_silence_len] - if audio_slice.rms <= silence_thresh: - silence_starts.append(i) - - # short circuit when there is no silence - if not silence_starts: - return [] - - # combine the silence we detected into ranges (start ms - end ms) - silent_ranges = [] - - prev_i = silence_starts.pop(0) - current_range_start = prev_i - - for silence_start_i in silence_starts: - continuous = (silence_start_i == prev_i + seek_step) - - # sometimes two small blips are enough for one particular slice to be - # non-silent, despite the silence all running together. Just combine - # the two overlapping silent ranges. - silence_has_gap = silence_start_i > (prev_i + min_silence_len) - - if not continuous and silence_has_gap: - silent_ranges.append([current_range_start, - prev_i + min_silence_len]) - current_range_start = silence_start_i - prev_i = silence_start_i - - silent_ranges.append([current_range_start, - prev_i + min_silence_len]) - - return silent_ranges - - -def detect_nonsilent(audio_segment, min_silence_len=1000, silence_thresh=-16, seek_step=1): - """ - Returns a list of all nonsilent sections [start, end] in milliseconds of audio_segment. 
- Inverse of detect_silent() - - audio_segment - the segment to find silence in - min_silence_len - the minimum length for any silent section - silence_thresh - the upper bound for how quiet is silent in dFBS - seek_step - step size for interating over the segment in ms - """ - silent_ranges = detect_silence(audio_segment, min_silence_len, silence_thresh, seek_step) - len_seg = len(audio_segment) - - # if there is no silence, the whole thing is nonsilent - if not silent_ranges: - return [[0, len_seg]] - - # short circuit when the whole audio segment is silent - if silent_ranges[0][0] == 0 and silent_ranges[0][1] == len_seg: - return [] - - prev_end_i = 0 - nonsilent_ranges = [] - for start_i, end_i in silent_ranges: - nonsilent_ranges.append([prev_end_i, start_i]) - prev_end_i = end_i - - if end_i != len_seg: - nonsilent_ranges.append([prev_end_i, len_seg]) - - if nonsilent_ranges[0] == [0, 0]: - nonsilent_ranges.pop(0) - - return nonsilent_ranges - - -def split_on_silence(audio_segment, min_silence_len=1000, silence_thresh=-16, keep_silence=100, - seek_step=1): - """ - Returns list of audio segments from splitting audio_segment on silent sections - - audio_segment - original pydub.AudioSegment() object - - min_silence_len - (in ms) minimum length of a silence to be used for - a split. default: 1000ms - - silence_thresh - (in dBFS) anything quieter than this will be - considered silence. default: -16dBFS - - keep_silence - (in ms or True/False) leave some silence at the beginning - and end of the chunks. Keeps the sound from sounding like it - is abruptly cut off. - When the length of the silence is less than the keep_silence duration - it is split evenly between the preceding and following non-silent - segments. - If True is specified, all the silence is kept, if False none is kept. - default: 100ms - - seek_step - step size for interating over the segment in ms - """ - - # from the itertools documentation - def pairwise(iterable): - "s -> (s0,s1), (s1,s2), (s2, s3), ..." - a, b = itertools.tee(iterable) - next(b, None) - return zip(a, b) - - if isinstance(keep_silence, bool): - keep_silence = len(audio_segment) if keep_silence else 0 - - output_ranges = [ - [ start - keep_silence, end + keep_silence ] - for (start,end) - in detect_nonsilent(audio_segment, min_silence_len, silence_thresh, seek_step) - ] - - for range_i, range_ii in pairwise(output_ranges): - last_end = range_i[1] - next_start = range_ii[0] - if next_start < last_end: - range_i[1] = (last_end+next_start)//2 - range_ii[0] = range_i[1] - - return [ - audio_segment[ max(start,0) : min(end,len(audio_segment)) ] - for start,end in output_ranges - ] - - -def detect_leading_silence(sound, silence_threshold=-50.0, chunk_size=10): - """ - Returns the millisecond/index that the leading silence ends. 
- - audio_segment - the segment to find silence in - silence_threshold - the upper bound for how quiet is silent in dFBS - chunk_size - chunk size for interating over the segment in ms - """ - trim_ms = 0 # ms - assert chunk_size > 0 # to avoid infinite loop - while sound[trim_ms:trim_ms+chunk_size].dBFS < silence_threshold and trim_ms < len(sound): - trim_ms += chunk_size - - # if there is no end it should return the length of the segment - return min(trim_ms, len(sound)) - - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/snobol.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/snobol.py deleted file mode 100644 index 28087de2442cfc2a01fa2979513f634e701e07cb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/snobol.py +++ /dev/null @@ -1,82 +0,0 @@ -""" - pygments.lexers.snobol - ~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for the SNOBOL language. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, bygroups -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Punctuation - -__all__ = ['SnobolLexer'] - - -class SnobolLexer(RegexLexer): - """ - Lexer for the SNOBOL4 programming language. - - Recognizes the common ASCII equivalents of the original SNOBOL4 operators. - Does not require spaces around binary operators. - - .. versionadded:: 1.5 - """ - - name = "Snobol" - aliases = ["snobol"] - filenames = ['*.snobol'] - mimetypes = ['text/x-snobol'] - - tokens = { - # root state, start of line - # comments, continuation lines, and directives start in column 1 - # as do labels - 'root': [ - (r'\*.*\n', Comment), - (r'[+.] ', Punctuation, 'statement'), - (r'-.*\n', Comment), - (r'END\s*\n', Name.Label, 'heredoc'), - (r'[A-Za-z$][\w$]*', Name.Label, 'statement'), - (r'\s+', Text, 'statement'), - ], - # statement state, line after continuation or label - 'statement': [ - (r'\s*\n', Text, '#pop'), - (r'\s+', Text), - (r'(?<=[^\w.])(LT|LE|EQ|NE|GE|GT|INTEGER|IDENT|DIFFER|LGT|SIZE|' - r'REPLACE|TRIM|DUPL|REMDR|DATE|TIME|EVAL|APPLY|OPSYN|LOAD|UNLOAD|' - r'LEN|SPAN|BREAK|ANY|NOTANY|TAB|RTAB|REM|POS|RPOS|FAIL|FENCE|' - r'ABORT|ARB|ARBNO|BAL|SUCCEED|INPUT|OUTPUT|TERMINAL)(?=[^\w.])', - Name.Builtin), - (r'[A-Za-z][\w.]*', Name), - # ASCII equivalents of original operators - # | for the EBCDIC equivalent, ! likewise - # \ for EBCDIC negation - (r'\*\*|[?$.!%*/#+\-@|&\\=]', Operator), - (r'"[^"]*"', String), - (r"'[^']*'", String), - # Accept SPITBOL syntax for real numbers - # as well as Macro SNOBOL4 - (r'[0-9]+(?=[^.EeDd])', Number.Integer), - (r'[0-9]+(\.[0-9]*)?([EDed][-+]?[0-9]+)?', Number.Float), - # Goto - (r':', Punctuation, 'goto'), - (r'[()<>,;]', Punctuation), - ], - # Goto block - 'goto': [ - (r'\s*\n', Text, "#pop:2"), - (r'\s+', Text), - (r'F|S', Keyword), - (r'(\()([A-Za-z][\w.]*)(\))', - bygroups(Punctuation, Name.Label, Punctuation)) - ], - # everything after the END statement is basically one - # big heredoc. 
- 'heredoc': [ - (r'.*\n', String.Heredoc) - ] - } diff --git a/spaces/projecte-aina/transcripcio-fonetica-catala/festival.py b/spaces/projecte-aina/transcripcio-fonetica-catala/festival.py deleted file mode 100644 index a7f407d6dcb8ce439768e8c12f6f31964cb63b7b..0000000000000000000000000000000000000000 --- a/spaces/projecte-aina/transcripcio-fonetica-catala/festival.py +++ /dev/null @@ -1,65 +0,0 @@ -#!/usr/bin/env python -# -*- encoding: utf-8 -*- -# -# Copyright (c) 2016 Jordi Mas i Hernandez -# -# This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this program; if not, write to the -# Free Software Foundation, Inc., 59 Temple Place - Suite 330, -# Boston, MA 02111-1307, USA. - -import subprocess -import tempfile - -festival_voices = { - "ona": "voice_upc_ca_ona_hts", - "pau": "voice_upc_ca_pau_hts" -} - -def _normalize(result): - mapping = { - '’' : '\'', - 'à' : 'à', - 'í' : 'í', - 'ó' : 'ó', - 'è' : 'è', - 'ò' : 'ò', - 'ú' : 'ú', - } - - for char in mapping.keys(): - result = result.replace(char, mapping[char]) - - return result - - -def festival_synthesize(text, voice): - if voice not in ["ona", "pau"]: - raise Error - - txt2wave = '/usr/bin/text2wave' - - with tempfile.NamedTemporaryFile() as encoded_file,\ - tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as wave_file: - - text = _normalize(text) - f = open(encoded_file.name, 'wb') - f.write(text.encode('ISO-8859-15', 'ignore')) - f.close() - - cmd = '{0} -o {1} {2} -eval "({3})"'.\ - format(txt2wave, wave_file.name, encoded_file.name, festival_voices[voice]) - p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) - p.wait() - - return wave_file.name diff --git a/spaces/qprinceqq/noise-greeter-demo/app.py b/spaces/qprinceqq/noise-greeter-demo/app.py deleted file mode 100644 index 933a337cf6bd0245b37d42ed4b433aef891405d0..0000000000000000000000000000000000000000 --- a/spaces/qprinceqq/noise-greeter-demo/app.py +++ /dev/null @@ -1,274 +0,0 @@ -import numpy as np -import cv2 -import gradio as gr -import albumentations as A - - -class I: - def __init__(self): - pass - - class Noises: - def __init__(self): - pass - - def ImpulseGray(self, shape, a, b, c): - if len(shape) > 2: - img = cv2.cvtColor(self, cv2.COLOR_RGB2GRAY) - v, h = shape[0], shape[1] - size = v * h - nop = int(np.random.randint(size * a * 0.75, size * a * 0.875)) - # y_list = [abs(int(np.random.normal(loc=int(np.random.randint(0, int(v/b)+1))*b, scale=b/4))) for i in range(nop)] - # x_list = [abs(int(np.random.normal(loc=int(np.random.randint(0, int(h/c)+1))*c, scale=c/4))) for i in range(nop)] - for i in range(nop): - x = abs(int(np.random.normal(loc=int(np.random.randint(0, int(h / c) + 1)) * c, scale=c / 4))) - y = abs(int(np.random.normal(loc=int(np.random.randint(0, int(v / b) + 1)) * b, scale=b / 4))) - if y >= v or x >= h: - continue - if np.random.randint(0, 2) == 1: - img[y][x] = 255 - else: - img[y][x] = 0 - return img - - def ImpulseRGB(img, shape, a, b, c, f): - v, h = shape[0], shape[1] - size = v 
* h - f = 51 * ((f + 4) / 4) - nop = int(np.random.randint(size * a * 0.75, size * a * 0.875)) - # y_list = [abs(int(np.random.normal(loc=int(np.random.randint(0, int(v/b)+1))*b, scale=b/4))) for i in range(nop)] - # x_list = [abs(int(np.random.normal(loc=int(np.random.randint(0, int(h/c)+1))*c, scale=c/4))) for i in range(nop)] - for i in range(nop): - x = abs(int(np.random.normal(loc=int(np.random.randint(0, int(h / c) + 1)) * c, scale=c / 4))) - y = abs(int(np.random.normal(loc=int(np.random.randint(0, int(v / b) + 1)) * b, scale=b / 4))) - if y >= v or x >= h: - continue - if np.random.randint(0, 2) == 1: - if img[y][x][0] + f > 255: - img[y][x][0] = 255 - else: - img[y][x][0] += f - if img[y][x][1] + f > 255: - img[y][x][1] = 255 - else: - img[y][x][1] += f - if img[y][x][2] + f > 255: - img[y][x][2] = 255 - else: - img[y][x][2] += f - else: - if img[y][x][0] - f < 0: - img[y][x][0] = 0 - else: - img[y][x][0] -= f - if img[y][x][1] - f < 0: - img[y][x][1] = 0 - else: - img[y][x][1] -= f - if img[y][x][2] - f < 0: - img[y][x][2] = 0 - else: - img[y][x][2] -= f - return img - - def Red(img, shape, a, b, c): - v, h = shape[0], shape[1] - size = v * h - nop = int(np.random.randint(size * a * 0.75, size * a * 0.875)) - x_list = np.random.randint(0, h, size=nop) - y_list = np.random.randint(0, v, size=nop) - for i in range(nop): - x = x_list[i] - y = y_list[i] - if c * 100 > np.random.randint(0, 101): - img[y][x][0] = 255 - img[y][x][1] = 0 - img[y][x][2] = 0 - var = abs(int(np.random.normal(loc=b, scale=b / 2))) - if img[y][x][0] + var > 255: - img[y][x][0] = 255 - else: - img[y][x][0] += var - return img - - def Green(img, shape, a, b, c): - v, h = shape[0], shape[1] - size = v * h - nop = int(np.random.randint(size * a * 0.75, size * a * 0.875)) - x_list = np.random.randint(0, h, size=nop) - y_list = np.random.randint(0, v, size=nop) - for i in range(nop): - x = x_list[i] - y = y_list[i] - if c * 100 > np.random.randint(0, 101): - img[y][x][0] = 0 - img[y][x][1] = 255 - img[y][x][2] = 0 - - var = abs(int(np.random.normal(loc=b, scale=b / 2))) - if img[y][x][1] + var > 255: - img[y][x][1] = 255 - else: - img[y][x][1] += var - return img - - def Blue(img, shape, a, b, c): - v, h = shape[0], shape[1] - size = v * h - nop = int(np.random.randint(size * a * 0.75, size * a * 0.875)) - x_list = np.random.randint(0, h, size=nop) - y_list = np.random.randint(0, v, size=nop) - for i in range(nop): - x = x_list[i] - y = y_list[i] - if c * 100 > np.random.randint(0, 101): - img[y][x][0] = 0 - img[y][x][1] = 0 - img[y][x][2] = 255 - var = abs(int(np.random.normal(loc=b, scale=b))) - if img[y][x][2] + var > 255: - img[y][x][2] = 255 - else: - img[y][x][2] += var - return img - - class Filters: - def BlackAW(img, vv1, b): - if len(img.shape) > 2: - img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - y, x = img.shape - for i in range(y): - for j in range(x): - if not vv1: - if img[i][j] >= int(255 * b): - img[i][j] = 255 - else: - img[i][j] = 0 - else: - if img[i][j] >= int(255 * b): - img[i][j] = 0 - else: - img[i][j] = 255 - return img - - def Blur(img, b, c): - if b % 2 == 0: b += 1 - if c % 2 == 0: c += 1 - img = cv2.GaussianBlur(img, (b, c), 3) - return img - - def Compress(img, b): - transform = A.JpegCompression(quality_lower=b, quality_upper=b, p=1) - img = transform(image=img) - return img['image'] - - def Perspective(img, b): - transform = A.Perspective(scale=(b, b), keep_size=True, fit_output=True, p=1) - img = transform(image=img) - return img['image'] - - -def final_greet(img, c1, c2, sl2, sl3, 
sl4, - c11, sl22, sl33, sl44, - c111, sl222, sl333, sl444, - c1111, sl2222, sl3333, sl4444, f, - v1, v2, v3, v4, vf11, - vf21, vf22, vf31, vf41, vv1): - shape = np.array(img).shape - - if c11 and sl22 != 0: - img = I.Noises.Red(img, shape, sl22, sl33, sl44) - if c111 and sl222 != 0: - img = I.Noises.Green(img, shape, sl222, sl333, sl444) - if c1111 and sl2222 != 0: - img = I.Noises.Blue(img, shape, sl2222, sl3333, sl4444) - if c1 and sl2 != 0 and not c2: - img = I.Noises.ImpulseGray(img, shape, sl2, sl3, sl4) - elif c1 and sl2 == 0 and not c2: - img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - elif c1 and sl2 != 0: - img = I.Noises.ImpulseRGB(img, shape, sl2, sl3, sl4, f) - - if v2: - img = I.Filters.Blur(img, vf21, vf22) - if v3: - img = I.Filters.Compress(img, vf31) - - if v1: - img = I.Filters.BlackAW(img, vv1, vf11) - - if v4: - img = I.Filters.Perspective(img, vf41) - - return img - - -theme = gr.themes.Default(primary_hue="blue", secondary_hue="blue", neutral_hue="stone").set( - slider_color="#4040A0", - slider_color_dark="*secondary_600", - block_title_text_weight="600", - block_border_width="2px", - block_shadow="*shadow_drop_md", - button_shadow="*shadow_drop_md", - button_large_padding="16px" -) - -with gr.Blocks(title='A Noise Greeter', theme=theme) as proj: - with gr.Group(): - with gr.Row(): - c11 = gr.Checkbox(label="Красный шум", info="Цветной шум") - sl22 = gr.Slider(0, 1, step=0.05, label="Количество шума") - sl33 = gr.Slider(0, 20, step=2, label="Сила шума") - sl44 = gr.Slider(0, 1, step=0.02, label="Шанс закраски пикселя") - with gr.Row(): - c111 = gr.Checkbox(label="Зелёный шум", info="Цветной шум") - sl222 = gr.Slider(0, 1, step=0.05, label="Количество шума") - sl333 = gr.Slider(0, 20, step=2, label="Сила шума") - sl444 = gr.Slider(0, 1, step=0.02, label="Шанс закраски пикселя") - with gr.Row(): - c1111 = gr.Checkbox(label="Синий шум", info="Цветной шум") - sl2222 = gr.Slider(0, 1, step=0.05, label="Количество шума") - sl3333 = gr.Slider(0, 20, step=2, label="Сила шума") - sl4444 = gr.Slider(0, 1, step=0.02, label="Шанс закраски пикселя") - with gr.Row(): - c1 = gr.Checkbox(label="Импульсный шум", info="Ч/б шум") - c2 = gr.Checkbox(label="В цвете", info="Режим") - sl2 = gr.Slider(0, 1, step=0.05, label="Количество шума") - f = gr.Slider(-4, 4, step=1, value=0, label="Сила шума (только в цвете)") - sl3 = gr.Slider(2, 20, step=2, label="Вертикаль шума") - sl4 = gr.Slider(2, 20, step=2, label="Горизонталь шума") - with gr.Group(): - with gr.Row(): - v2 = gr.Checkbox(label="Размытие изображения", info="Цветной фильтр") - vf21 = gr.Slider(1, 51, step=2, label="Икс") - vf22 = gr.Slider(1, 51, step=2, label="Игрик") - with gr.Row(): - v3 = gr.Checkbox(label="Сжатие изображения", info="Цветной фильтр") - vf31 = gr.Slider(0, 100, step=1, label="Качество картинки") - with gr.Row(): - v1 = gr.Checkbox(label="Чёрно-белый фильтр", info="Ч/б фильтр") - vv1 = gr.Checkbox(label="Инверсия", info="Режим") - vf11 = gr.Slider(0, 1, step=0.05, value=0.5, label="Трешхолд (пороговое значение)") - - with gr.Row(): - v4 = gr.Checkbox(label="Изменение перспективы изображения", info="Цветной фильтр") - vf41 = gr.Slider(0, 1, step=0.05, value=0, label="Скейл") - - with gr.Row(): - with gr.Column(): - in_image = gr.Image(label="Исходная картинка") - greet_btn = gr.Button("Обработать") - out_image = gr.Image(label="Обработанная картинка") - - greet_btn.click(fn=final_greet, inputs=[ - in_image, - c1, c2, sl2, sl3, sl4, - c11, sl22, sl33, sl44, - c111, sl222, sl333, sl444, - c1111, sl2222, sl3333, 
sl4444, f, - v1, v2, v3, v4, - vf11, vf21, vf22, - vf31, vf41, vv1 - ], outputs=out_image) - -proj.launch(share=False) diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Among Us Download Pc Games 88.md b/spaces/quidiaMuxgu/Expedit-SAM/Among Us Download Pc Games 88.md deleted file mode 100644 index fe0ac18c5140293c0a76a8df91b781171d2a880f..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Among Us Download Pc Games 88.md +++ /dev/null @@ -1,110 +0,0 @@ - -

            Among Us Download Pc Games 88: A Guide

            -

Among Us is one of the most popular online multiplayer games of 2020 and 2021. It is a social deduction game in which crewmates work together to complete tasks and keep their spaceship running, while one or more impostors try to sabotage the ship and kill everyone. The game is available for Android, iOS, PC, and console platforms, and you can play it online or via local WiFi with up to 15 players. In this article, we will show you how to download and play Among Us on PC using a website called Pc Games 88.

            -

            What is Pc Games 88?

            -

            Pc Games 88 is a website that offers free downloads of various PC games, including Among Us. The website claims to provide high-quality games that are virus-free and easy to install. The website also provides screenshots, videos, system requirements, and reviews of the games that it offers. However, we cannot guarantee the safety or legality of the website or the games that it provides. Therefore, we advise you to use caution and discretion when downloading and playing games from Pc Games 88.

            -



Download File: https://geags.com/2uCsMD



            -

            How to Download and Play Among Us from Pc Games 88?

            -

            To download and play Among Us from Pc Games 88, you need to follow these steps:

            -
              -
1. Go to this link and click on the download button.
2. Wait for the download to finish and locate the file on your computer.
3. Extract the file using a tool like WinRAR or 7-Zip.
4. Open the extracted folder and run the setup.exe file.
5. Follow the instructions on the screen to install the game.
6. Launch the game from your desktop or start menu.
7. Enjoy playing Among Us on your PC!
            -

            What are the Features of Among Us from Pc Games 88?

            -

            Among Us from Pc Games 88 has many features that make it a fun and exciting game to play. Some of these features are:

            -
              -
• The game has four different maps to choose from: The Skeld, MIRA HQ, Polus, and The Airship.
• The game allows you to customize your character with different colors, hats, skins, and pets.
• The game has different game modes and options to adjust the number of impostors, tasks, roles, speed, vision, voting time, and more.
• The game has a chat system that lets you communicate with other players during meetings or emergencies.
• The game has cross-platform compatibility that lets you play with other players on Android, iOS, PC, or console devices.
            -

            What are the Pros and Cons of Among Us from Pc Games 88?

            -

            Like any other game, Among Us from Pc Games 88 has its pros and cons. Here are some of them:

| Pros | Cons |
| --- | --- |
| The game is free to download and play. | The game may not be safe or legal to download and play. |
| The game is fun and addictive to play with friends or strangers. | The game can be frustrating or boring to play with toxic or cheating players. |
| The game is simple and easy to learn and play. | The game can be repetitive and predictable after a while. |
| The game is humorous and creative in its design and gameplay. | The game can be glitchy or laggy at times. |
            - -

            Conclusion

            - -

            Among Us Download Pc Games 88 is a guide that shows you how to download and play Among Us on PC using a website called Pc Games 88. The website offers free downloads of various PC games, including Among Us. The game is a social deduction game where you have to work together with other players to repair a spaceship or find the impostor who is trying to sabotage it and kill everyone. The game has many features, such as different maps, customization options, game modes, chat system, and cross-platform compatibility. The game also has some pros and cons that you should consider before downloading and playing it. We hope this guide was helpful and informative for you. Thank you for reading!

            -

            FAQs about Among Us Download Pc Games 88

            -

            In this section, we will answer some of the frequently asked questions about Among Us Download Pc Games 88. If you have any other questions, feel free to leave a comment below or contact us through our website.

            -

            Is Among Us Download Pc Games 88 safe to download and play?

            -

Among Us Download Pc Games 88 may not be legal to download or play in countries or regions where the game is protected by copyright or other intellectual property laws, and we cannot vouch for the safety of the files that the website provides. Therefore, we advise you to check the laws and regulations of your country or region before downloading and playing the game. We do not condone or support any illegal or unethical activities related to the game.

            -

            -

Moreover, we cannot guarantee that the website or the game that it provides is free from viruses, malware, or other harmful elements. Therefore, we advise you to scan the file with antivirus software before opening it and to back up your data before playing the game.
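If you are not sure how to run such a scan, the short sketch below shows one way to do it on Windows 10 using Python and the Microsoft Defender command-line scanner that ships with the system. The download path is only an assumption for the example, so change it to wherever you actually saved the file.

```python
import subprocess
from pathlib import Path

# Hypothetical download location - change this to wherever you saved the archive.
downloaded_file = Path(r"C:\Users\you\Downloads\AmongUs_PcGames88.zip")

# Default location of the Microsoft Defender command-line scanner on Windows 10.
defender = Path(r"C:\Program Files\Windows Defender\MpCmdRun.exe")

# -ScanType 3 runs a custom scan of the file or folder passed with -File.
result = subprocess.run(
    [str(defender), "-Scan", "-ScanType", "3", "-File", str(downloaded_file)],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("The scan reported a problem or found a threat, so do not open the file.")
```

If you prefer not to use a script, you can also right-click the downloaded file in File Explorer and choose the scan option offered by your antivirus program.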

            -

            Is Among Us Download Pc Games 88 compatible with my PC?

            -

            Among Us Download Pc Games 88 is compatible with most PCs that meet the minimum system requirements for the game. These are:

            -
              -
• Operating system: Windows XP/Vista/7/8/10
• Processor: Intel Pentium 4 or AMD Athlon XP
• Memory: 512 MB RAM
• Graphics: NVIDIA GeForce FX or ATI Radeon 9500
• Storage: 4 GB available space
• Sound card: DirectX compatible
            -

            If your PC does not meet these requirements, you may experience some problems or errors while playing the game. You may also need to adjust some settings or install some drivers or patches to make the game run smoothly.

            -

            How can I improve the performance of Among Us Download Pc Games 88 on my PC?

            -

            If you want to improve the performance of Among Us Download Pc Games 88 on your PC, you can try some of these tips:

            -
              -
            • Close any unnecessary programs or applications that are running in the background.
            • -
            • Update your graphics card and sound card drivers.
            • -
            • Lower the resolution and graphics quality of the game.
            • -
            • Disable any anti-aliasing or filtering options in the game.
            • -
            • Enable the frame limiter option in the game.
            • -
            • Clean your PC from any dust or dirt that may affect its cooling system.
            • -
            - -

            Final Words

            - -

            We hope you enjoyed reading this article about Among Us Download Pc Games 88. We hope we answered all your questions and provided you with useful information and tips. If you liked this article, please share it with your friends and family who are also fans of Among Us. If you have any feedback or suggestions for us, please let us know in the comments section below. Thank you for your time and attention!

            -

            What are the Reviews of Among Us Download Pc Games 88?

            -

            If you are curious about what other people think of Among Us Download Pc Games 88, you can read some of the reviews that we found online. Here are some of them:

            -
            -

            "Among Us Download Pc Games 88 is a great way to play Among Us on PC for free. The game is fun and addictive, and I love playing it with my friends online. The game has a lot of features and options that make it more interesting and challenging. The game works well on my PC, and I did not encounter any major issues or problems. I recommend this game to anyone who likes social deduction games and wants to have some fun with their friends." - Mark Lee

            -
            -
            -

            "Among Us Download Pc Games 88 is a terrible way to play Among Us on PC. The game is boring and repetitive, and I hate playing it with strangers online. The game has a lot of flaws and bugs that make it frustrating and annoying. The game does not work well on my PC, and I had to deal with many crashes and errors. I do not recommend this game to anyone who respects themselves and their PC." - Lisa Kim

            -
            -
            -

            "Among Us Download Pc Games 88 is an okay way to play Among Us on PC. The game is fun and entertaining, but it can also be tedious and predictable after a while. The game has some features and options that make it more varied and enjoyable, but it also has some glitches and lag that make it less smooth and reliable. The game works fine on my PC, but I had to adjust some settings to make it run better. I think this game is good for casual players who want to try Among Us on PC, but not for serious gamers who expect high-quality graphics, sound, and gameplay." - Alex Smith

            -
            - -


            How to Uninstall Among Us Download Pc Games 88?

            -

            If you want to uninstall Among Us Download Pc Games 88 from your PC, you can follow these steps:

            -
              -
            1. Go to the folder where you installed the game and delete it.
            2. -
            3. Go to the Control Panel and click on Add or Remove Programs.
            4. -
5. Find the game in the list of installed programs, select it, and click on Remove.
            6. -
            7. Follow the instructions on the screen to complete the uninstallation process.
            8. -
            9. Restart your PC to apply the changes.
            10. -
            -

            Note: This will only remove the game from your PC. It will not affect any other files or programs that you have on your PC.

            -

            Where to Find More Games Like Among Us Download Pc Games 88?

            -

            If you enjoyed playing Among Us Download Pc Games 88, you might be interested in finding more games like it. Here are some suggestions for you:

            -
              -
            • Among Us: This is the official version of the game that you can buy and play on Steam or Epic Games Store. The game has more features, updates, and support than the free version from Pc Games 88. The game also has a dedicated community and fan base that you can join and interact with.
            • -
            • Town of Salem: This is another social deduction game that you can play online with up to 15 players. The game is based on the classic party game Mafia, where you have to find and eliminate the evil roles among the town members. The game has different modes, roles, maps, and customization options that make it more diverse and challenging.
            • -
            • Secret Neighbor: This is a multiplayer horror game that you can play online with up to 6 players. The game is based on the popular horror game Hello Neighbor, where you have to sneak into your neighbor's house and find out his secrets. The game has a twist: one of your friends is secretly working with the neighbor and can change his appearance at any time. The game has different characters, abilities, items, and maps that make it more thrilling and scary.
            • -
            - -


            Conclusion

            -

            Among Us Download Pc Games 88 is a guide that shows you how to download and play Among Us on PC using a website called Pc Games 88. The website offers free downloads of various PC games, including Among Us. The game is a social deduction game where you have to work together with other players to repair a spaceship or find the impostor who is trying to sabotage it and kill everyone. The game has many features, such as different maps, customization options, game modes, chat system, and cross-platform compatibility. The game also has some pros and cons that you should consider before downloading and playing it. We hope this guide was helpful and informative for you. If you want to play more games like Among Us, you can check out our suggestions in this article. Thank you for reading and have a great day!

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dynomite Deluxe Offline Activation Keygen.md b/spaces/quidiaMuxgu/Expedit-SAM/Dynomite Deluxe Offline Activation Keygen.md deleted file mode 100644 index 3f15f2180ebf3b1cab009121c838bc5e999be298..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Dynomite Deluxe Offline Activation Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Dynomite Deluxe offline activation keygen


            Download ✒ ✒ ✒ https://geags.com/2uCq4u



            - -If you are new to Sony Vegas you will need to read the manual, the program can get a ... Dynomite Deluxe Offline Activation Keygen amarjwet. 1fdad05405
            -
            -
            -

            diff --git a/spaces/r3gm/RVC_HF/infer/modules/uvr5/mdxnet.py b/spaces/r3gm/RVC_HF/infer/modules/uvr5/mdxnet.py deleted file mode 100644 index 86a066893ad99cfed77788027a9deb8ed486a7f2..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/infer/modules/uvr5/mdxnet.py +++ /dev/null @@ -1,246 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch -from tqdm import tqdm - -cpu = torch.device("cpu") - - -class ConvTDFNetTrim: - def __init__( - self, device, model_name, target_name, L, dim_f, dim_t, n_fft, hop=1024 - ): - super(ConvTDFNetTrim, self).__init__() - - self.dim_f = dim_f - self.dim_t = 2**dim_t - self.n_fft = n_fft - self.hop = hop - self.n_bins = self.n_fft // 2 + 1 - self.chunk_size = hop * (self.dim_t - 1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to( - device - ) - self.target_name = target_name - self.blender = "blender" in model_name - - self.dim_c = 4 - out_c = self.dim_c * 4 if target_name == "*" else self.dim_c - self.freq_pad = torch.zeros( - [1, out_c, self.n_bins - self.dim_f, self.dim_t] - ).to(device) - - self.n = L // 2 - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft( - x, - n_fft=self.n_fft, - hop_length=self.hop, - window=self.window, - center=True, - return_complex=True, - ) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape( - [-1, self.dim_c, self.n_bins, self.dim_t] - ) - return x[:, :, : self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = ( - self.freq_pad.repeat([x.shape[0], 1, 1, 1]) - if freq_pad is None - else freq_pad - ) - x = torch.cat([x, freq_pad], -2) - c = 4 * 2 if self.target_name == "*" else 2 - x = x.reshape([-1, c, 2, self.n_bins, self.dim_t]).reshape( - [-1, 2, self.n_bins, self.dim_t] - ) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft( - x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True - ) - return x.reshape([-1, c, self.chunk_size]) - - -def get_models(device, dim_f, dim_t, n_fft): - return ConvTDFNetTrim( - device=device, - model_name="Conv-TDF", - target_name="vocals", - L=11, - dim_f=dim_f, - dim_t=dim_t, - n_fft=n_fft, - ) - - -class Predictor: - def __init__(self, args): - import onnxruntime as ort - - logger.info(ort.get_available_providers()) - self.args = args - self.model_ = get_models( - device=cpu, dim_f=args.dim_f, dim_t=args.dim_t, n_fft=args.n_fft - ) - self.model = ort.InferenceSession( - os.path.join(args.onnx, self.model_.target_name + ".onnx"), - providers=[ - "CUDAExecutionProvider", - "DmlExecutionProvider", - "CPUExecutionProvider", - ], - ) - logger.info("ONNX load done") - - def demix(self, mix): - samples = mix.shape[-1] - margin = self.args.margin - chunk_size = self.args.chunks * 44100 - assert not margin == 0, "margin cannot be zero!" 
- if margin > chunk_size: - margin = chunk_size - - segmented_mix = {} - - if self.args.chunks == 0 or samples < chunk_size: - chunk_size = samples - - counter = -1 - for skip in range(0, samples, chunk_size): - counter += 1 - - s_margin = 0 if counter == 0 else margin - end = min(skip + chunk_size + margin, samples) - - start = skip - s_margin - - segmented_mix[skip] = mix[:, start:end].copy() - if end == samples: - break - - sources = self.demix_base(segmented_mix, margin_size=margin) - """ - mix:(2,big_sample) - segmented_mix:offset->(2,small_sample) - sources:(1,2,big_sample) - """ - return sources - - def demix_base(self, mixes, margin_size): - chunked_sources = [] - progress_bar = tqdm(total=len(mixes)) - progress_bar.set_description("Processing") - for mix in mixes: - cmix = mixes[mix] - sources = [] - n_sample = cmix.shape[1] - model = self.model_ - trim = model.n_fft // 2 - gen_size = model.chunk_size - 2 * trim - pad = gen_size - n_sample % gen_size - mix_p = np.concatenate( - (np.zeros((2, trim)), cmix, np.zeros((2, pad)), np.zeros((2, trim))), 1 - ) - mix_waves = [] - i = 0 - while i < n_sample + pad: - waves = np.array(mix_p[:, i : i + model.chunk_size]) - mix_waves.append(waves) - i += gen_size - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(cpu) - with torch.no_grad(): - _ort = self.model - spek = model.stft(mix_waves) - if self.args.denoise: - spec_pred = ( - -_ort.run(None, {"input": -spek.cpu().numpy()})[0] * 0.5 - + _ort.run(None, {"input": spek.cpu().numpy()})[0] * 0.5 - ) - tar_waves = model.istft(torch.tensor(spec_pred)) - else: - tar_waves = model.istft( - torch.tensor(_ort.run(None, {"input": spek.cpu().numpy()})[0]) - ) - tar_signal = ( - tar_waves[:, :, trim:-trim] - .transpose(0, 1) - .reshape(2, -1) - .numpy()[:, :-pad] - ) - - start = 0 if mix == 0 else margin_size - end = None if mix == list(mixes.keys())[::-1][0] else -margin_size - if margin_size == 0: - end = None - sources.append(tar_signal[:, start:end]) - - progress_bar.update(1) - - chunked_sources.append(sources) - _sources = np.concatenate(chunked_sources, axis=-1) - # del self.model - progress_bar.close() - return _sources - - def prediction(self, m, vocal_root, others_root, format): - os.makedirs(vocal_root, exist_ok=True) - os.makedirs(others_root, exist_ok=True) - basename = os.path.basename(m) - mix, rate = librosa.load(m, mono=False, sr=44100) - if mix.ndim == 1: - mix = np.asfortranarray([mix, mix]) - mix = mix.T - sources = self.demix(mix.T) - opt = sources[0].T - if format in ["wav", "flac"]: - sf.write( - "%s/%s_main_vocal.%s" % (vocal_root, basename, format), mix - opt, rate - ) - sf.write("%s/%s_others.%s" % (others_root, basename, format), opt, rate) - else: - path_vocal = "%s/%s_main_vocal.wav" % (vocal_root, basename) - path_other = "%s/%s_others.wav" % (others_root, basename) - sf.write(path_vocal, mix - opt, rate) - sf.write(path_other, opt, rate) - if os.path.exists(path_vocal): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_vocal, path_vocal[:-4] + ".%s" % format) - ) - if os.path.exists(path_other): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path_other, path_other[:-4] + ".%s" % format) - ) - - -class MDXNetDereverb: - def __init__(self, chunks, device): - self.onnx = "assets/uvr5_weights/onnx_dereverb_By_FoxJoy" - self.shifts = 10 # 'Predict with randomised equivariant stabilisation' - self.mixing = "min_mag" # ['default','min_mag','max_mag'] - self.chunks = chunks - self.margin = 44100 - self.dim_t = 9 - self.dim_f = 3072 - self.n_fft = 6144 - 
self.denoise = True - self.pred = Predictor(self) - self.device = device - - def path_audio(self, input, vocal_root, others_root, format): - self.pred.prediction(input, vocal_root, others_root, format) diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/configs/transforms_config.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/configs/transforms_config.py deleted file mode 100644 index 330d067d5ba8e869a2a4312ec749cd3b3c6a179b..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/configs/transforms_config.py +++ /dev/null @@ -1,154 +0,0 @@ -from abc import abstractmethod -import torchvision.transforms as transforms -from datasets import augmentations - - -class TransformsConfig(object): - - def __init__(self, opts): - self.opts = opts - - @abstractmethod - def get_transforms(self): - pass - - -class EncodeTransforms(TransformsConfig): - - def __init__(self, opts): - super(EncodeTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': None, - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class FrontalizationTransforms(TransformsConfig): - - def __init__(self, opts): - super(FrontalizationTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class SketchToImageTransforms(TransformsConfig): - - def __init__(self, opts): - super(SketchToImageTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor()]), - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor()]), - } - return transforms_dict - - -class SegToImageTransforms(TransformsConfig): - - def __init__(self, opts): - super(SegToImageTransforms, self).__init__(opts) - - def 
get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((256, 256)), - augmentations.ToOneHot(self.opts.label_nc), - transforms.ToTensor()]), - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - augmentations.ToOneHot(self.opts.label_nc), - transforms.ToTensor()]) - } - return transforms_dict - - -class SuperResTransforms(TransformsConfig): - - def __init__(self, opts): - super(SuperResTransforms, self).__init__(opts) - - def get_transforms(self): - if self.opts.resize_factors is None: - self.opts.resize_factors = '1,2,4,8,16,32' - factors = [int(f) for f in self.opts.resize_factors.split(",")] - print("Performing down-sampling with factors: {}".format(factors)) - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((256, 256)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Anurag I21 Crack 2014 Calendar Download and Install Guide.md b/spaces/raedeXanto/academic-chatgpt-beta/Anurag I21 Crack 2014 Calendar Download and Install Guide.md deleted file mode 100644 index 27a5426a5104a9b9ab7a9b789c0e1666d892e474..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Anurag I21 Crack 2014 Calendar Download and Install Guide.md +++ /dev/null @@ -1,131 +0,0 @@ - -

            Anurag I21 Crack 2014 Calendar: How to Download and Use It

            -

            If you are looking for a software that can help you create stunning calendars with your photos, you may have heard of Anurag I21 Crack 2014 Calendar. This is a pirated version of a popular photo editing software that claims to offer various features and tools for creating professional-looking calendars. But is it worth downloading and using? In this article, we will tell you everything you need to know about Anurag I21 Crack 2014 Calendar, including what it is, why you may need it, how to download it, how to use it, and what are its pros and cons. Read on to find out more.

            -

            Anurag I21 Crack 2014 Calendar


            Download File >>> https://tinourl.com/2uL1yj



            -

            Introduction

            -

            What is Anurag I21 Crack 2014 Calendar?

            -

            Anurag I21 Crack 2014 Calendar is a hacked version of Anurag i21, a photo editing software developed by Anurag Academy. Anurag i21 is a paid software that requires a license key to activate and use. However, some people have cracked the software and made it available for free download on the internet. Anurag I21 Crack 2014 Calendar is one of these cracked versions that has a calendar feature added to it. This feature allows you to create personalized calendars with your photos and customize them with various templates, backgrounds, fonts, colors, stickers, and effects.

            -

            Why do you need Anurag I21 Crack 2014 Calendar?

            -

            You may need Anurag I21 Crack 2014 Calendar if you want to create beautiful calendars with your photos without spending any money. Calendars are great gifts for your friends and family, as they can remind them of your special moments throughout the year. You can also use calendars for your own personal or professional purposes, such as planning your schedule, organizing your tasks, or promoting your business. With Anurag I21 Crack 2014 Calendar, you can create calendars that suit your style and needs.

            -

            How to download Anurag I21 Crack 2014 Calendar?

            -

            To download Anurag I21 Crack 2014 Calendar, you need to find a reliable source that offers the software for free. There are many websites that claim to provide the software, but some of them may be fake or malicious. You should be careful when downloading anything from the internet, as you may end up with viruses or malware that can harm your computer or data. You should also check the reviews and ratings of the website before downloading anything from it. You can also use a VPN or a proxy server to hide your IP address and location when downloading the software.

            -

            Features of Anurag I21 Crack 2014 Calendar

            -

            Easy to install and use

            -

            Anurag I21 Crack 2014 Calendar is easy to install and use. You just need to download the software from a trusted source and run the setup file. The installation process will take only a few minutes and you will not need any license key or activation code. Once installed, you can launch the software and start creating your calendars.

            -

            Compatible with Photoshop and other software

            -

            Anurag I21 Crack 2014 Calendar is compatible with Photoshop and other photo editing software. You can use it as a plugin or a standalone application. You can also import and export your photos from other software to Anurag I21 Crack 2014 Calendar. This makes it easy to edit your photos with different tools and effects before adding them to your calendars.

            -

            Offers various tools and effects for photo editing

            -

            Anurag I21 Crack 2014 Calendar offers various tools and effects for photo editing. You can use these tools and effects to enhance your photos and make them look more attractive. Some of these tools and effects include:

            -
              -
            • Crop: To cut out unwanted parts of your photos.
            • -
            • Rotate: To change the orientation of your photos.
            • -
            • Resize: To adjust the size of your photos.
            • -
            • Brightness: To increase or decrease the brightness of your photos.
            • -
            • Contrast: To increase or decrease the contrast of your photos.
            • -
            • Saturation: To increase or decrease the saturation of your photos.
            • -
            • Hue: To change the color tone of your photos.
            • -
            • Sharpen: To make your photos look sharper.
            • -
            • Blur: To make your photos look softer.
            • -
            • Noise: To add or remove noise from your photos.
            • -
            • Red Eye: To remove red eyes from your photos.
            • -
            • Blemish: To remove blemishes from your photos.
            • -
            • Whiten: To whiten teeth or eyes in your photos.
            • -
            • Glow: To add a glowing effect to your photos.
            • -
            • Sketch: To turn your photos into sketches.
            • -
            • Oil Paint: To turn your photos into oil paintings.
            • -
            • Cartoon: To turn your photos into cartoons.
            • -
            • Vignette: To add a dark border around your photos.
            • -
            • Frame: To add a frame around your photos.
            • -
            -

            Supports multiple languages and formats

            -

            Anurag I21 Crack 2014 Calendar supports multiple languages and formats. You can choose from different languages such as English, Hindi, Bengali, Gujarati, Marathi, Tamil, Telugu, Kannada, Malayalam, Urdu, Arabic, Persian, Chinese, Japanese, Korean, French, German, Spanish, Portuguese, Italian, Russian, Turkish etc. You can also choose from different formats such as JPG, PNG, BMP, TIFF, GIF, PDF etc. You can also customize the size, resolution, quality, and compression of your output files.
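
Anurag I21 Crack 2014 Calendar exposes these choices as menu options. If you want to see how the same size, resolution, and quality trade-offs look in code, here is a minimal sketch using the Pillow imaging library; Pillow is unrelated to Anurag i21, and the file names and numbers are placeholders:

```python
from PIL import Image  # pip install Pillow

page = Image.open("calendar_page.png").convert("RGB")       # placeholder input; JPEG needs RGB
page = page.resize((2480, 3508))                            # roughly A4 at 300 dpi
page.save("calendar_page.jpg", quality=90, dpi=(300, 300))  # lower quality means a smaller file
```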

            -

            How to download Anurag I21 Crack for free
            -Anurag I21 Crack with Photoshop integration
            -Anurag I21 Crack full version download link
            -Anurag I21 Crack software review and tutorial
            -Anurag I21 Crack vs other photo editing tools
            -Anurag I21 Crack features and benefits
            -Anurag I21 Crack license key generator
            -Anurag I21 Crack installation and activation guide
            -Anurag I21 Crack system requirements and compatibility
            -Anurag I21 Crack customer support and feedback
            -Anurag I21 Crack alternatives and competitors
            -Anurag I21 Crack discount and coupon codes
            -Anurag I21 Crack updates and bug fixes
            -Anurag I21 Crack testimonials and case studies
            -Anurag I21 Crack best practices and tips
            -Anurag I21 Crack pros and cons
            -Anurag I21 Crack comparison with Anurag 10 Pro
            -Anurag I21 Crack for professional photographers
            -Anurag I21 Crack for beginners and hobbyists
            -Anurag I21 Crack for Windows and Mac OS
            -Anurag I21 Crack for portrait and landscape photography
            -Anurag I21 Crack for wedding and event photography
            -Anurag I21 Crack for fashion and glamour photography
            -Anurag I21 Crack for product and ecommerce photography
            -Anurag I21 Crack for wildlife and nature photography
            -Anurag I21 Crack for sports and action photography
            -Anurag I21 Crack for travel and street photography
            -Anurag I21 Crack for black and white photography
            -Anurag I21 Crack for HDR and panorama photography
            -Anurag I21 Crack for artistic and creative photography
            -Anurag I21 Crack for photo restoration and retouching
            -Anurag I21 Crack for skin smoothing and enhancement
            -Anurag I21 Crack for background removal and replacement
            -Anurag I21 Crack for color correction and adjustment
            -Anurag I21 Crack for light and shadow effects
            -Anurag I21 Crack for filters and presets
            -Anurag I21 Crack for text and watermarking
            -Anurag I21 Crack for cropping and resizing
            -Anurag I21 Crack for collage and montage making
            -Anurag I21 Crack for batch processing and automation
            -How to uninstall Anurag I21 Crack from your computer
            -How to fix errors and issues with Anurag I21 Crack
            -How to optimize your photos with Anurag I21 Crack
            -How to learn more about Anurag I21 Crack
            -How to get help with Anurag I21 Crack

            -

            How to use Anurag I21 Crack 2014 Calendar

            -

            Launch the software and select the calendar option

            -

            To use Anurag I21 Crack 2014 Calendar, you need to launch the software first. You will see a welcome screen with different options such as Photo Editing, Calendar, Collage, Album etc. You need to select the calendar option to start creating your calendars.

            -

            Choose the template and customize it according to your preferences

            -

            After selecting the calendar option, you will see a list of templates that you can choose from. You can browse through different categories such as Yearly, Monthly, Weekly, Daily etc. You can also search for specific templates by entering keywords in the search box. You can preview each template by clicking on it. Once you find a template that you like, you can select it by clicking on the OK button. You will then see a customization screen where you can change various settings such as year, month, day, language, font, color etc. You can also add text, stickers, or logos to personalize your calendar. You can also change the background image or color of your calendar by clicking on the Background button. You can undo or redo any changes by clicking on the Undo or Redo buttons. You can also save or load any templates by clicking on the Save or Load buttons.

            -

            Add your photos and edit them with the available tools

            -

To add your photos to your calendar, you need to click on the Add Photo button. You will then see a file explorer window where you can browse through your computer folders and select the photos that you want to add. You can also drag and drop your photos from your desktop or any other source into the file explorer. You can add as many photos as you want to your calendar, but you should make sure that they are relevant and high-quality. You can also delete or rearrange your photos by clicking on them and using the Delete or Move buttons. After adding your photos, you can edit them with the available tools and effects. You can access these tools and effects by clicking on the Edit Photo button. You will then see a toolbar with different options such as Crop, Rotate, Resize, Brightness, Contrast, Saturation, Hue, Sharpen, Blur, Noise, Red Eye, Blemish, Whiten, Glow, Sketch, Oil Paint, Cartoon, Vignette, Frame etc. You can apply any of these options to your photos by clicking on them and adjusting the settings. You can also preview the changes by clicking on the Preview button. You can undo or redo any changes by clicking on the Undo or Redo buttons. You can also save or load any edits by clicking on the Save or Load buttons.

            -

            Save and print your calendar or share it online

            -

            After editing your photos and customizing your calendar, you can save and print your calendar or share it online. To save your calendar, you need to click on the Save button. You will then see a dialog box where you can choose the format, size, resolution, quality, and compression of your output file. You can also choose the destination folder where you want to save your file. You can also name your file and add a password to protect it. To print your calendar, you need to click on the Print button. You will then see a dialog box where you can choose the printer, paper size, orientation, margins, and copies of your output file. You can also preview your print by clicking on the Preview button. To share your calendar online, you need to click on the Share button. You will then see a dialog box where you can choose the platform, such as Facebook, Twitter, Instagram, WhatsApp, Email etc. You can also add a caption or a message to your post. You can also adjust the privacy settings of your post.

            -

            Pros and cons of Anurag I21 Crack 2014 Calendar

            -

            Pros: Affordable, versatile, user-friendly, high-quality output

            -

            Anurag I21 Crack 2014 Calendar has some advantages that may attract you to use it. Some of these advantages are:

            -
              -
            • Affordable: Anurag I21 Crack 2014 Calendar is free to download and use. You do not need to pay any money to create your calendars.
            • -
            • Versatile: Anurag I21 Crack 2014 Calendar offers various features and tools for photo editing and calendar creation. You can create calendars for any occasion, purpose, or theme.
            • -
            • User-friendly: Anurag I21 Crack 2014 Calendar is easy to install and use. You do not need any technical skills or experience to use it. The interface is simple and intuitive.
            • -
            • High-quality output: Anurag I21 Crack 2014 Calendar produces high-quality output files that look professional and attractive. You can customize the size, resolution, quality, and compression of your output files.
            • -
            -

            Cons: Illegal, risky, may contain viruses or malware, may damage your system or data

            -

            Anurag I21 Crack 2014 Calendar has some disadvantages that may discourage you from using it. Some of these disadvantages are:

            -
              -
            • Illegal: Anurag I21 Crack 2014 Calendar is an illegal software that violates the copyright and license agreement of Anurag Academy. You may face legal consequences if you use it.
            • -
            • Risky: Anurag I21 Crack 2014 Calendar is a risky software that may contain viruses or malware that can harm your computer or data. You may lose your data or compromise your security if you use it.
            • -
            • May contain viruses or malware: Anurag I21 Crack 2014 Calendar may contain viruses or malware that can infect your computer or data. You may experience slow performance, crashes, errors, pop-ups, ads, redirects etc if you use it.
            • -
            • May damage your system or data: Anurag I21 Crack 2014 Calendar may damage your system or data by deleting, modifying, encrypting, or corrupting them. You may lose access to your files or programs if you use it.
            • -
            -

            Conclusion

            -

            Summary of the main points

            -

            Anurag I21 Crack 2014 Calendar is a software that allows you to create personalized calendars with your photos and customize them with various templates, backgrounds, fonts, colors, stickers, and effects. It is easy to install and use and offers various features and tools for photo editing and calendar creation. It supports multiple languages and formats and produces high-quality output files. However, it is also an illegal software that violates the copyright and license agreement of Anurag Academy. It is also a risky software that may contain viruses or malware that can harm your computer or data. It may also damage your system or data by deleting, modifying, encrypting, or corrupting them.

            -

            Recommendation and advice

            -

            We do not recommend using Anurag I21 Crack 2014 Calendar for creating calendars with your photos. It is not worth risking your legal status, security, privacy, and data for a free software that may not work properly or safely. Instead, we advise you to use a legitimate software that offers similar features and tools for photo editing and calendar creation. You can find many such software online that are affordable, reliable, safe, and legal. Some examples are Canva (https://www.canva.com/), Fotor (https://www.fotor.com/), PicMonkey (https://www.picmonkey.com/), Adobe Spark (https://spark.adobe.com/), etc. These software are easy to use and offer various features and tools for photo editing and calendar creation. They support multiple languages and formats and produce high-quality output files. They also respect the copyright and license agreement of their developers and do not contain any viruses or malware that can harm your computer or data.

            -

            Frequently Asked Questions

            -

            Q1: What is the difference between Anurag i21 and Anurag I21 Crack 2014 Calendar?

            -

            A1: Anurag i21 is a photo editing software developed by Anurag Academy that requires a license key to activate and use. Anurag I21 Crack 2014 Calendar is a hacked version of Anurag i21 that does not require any license key to activate and use. It also has a calendar feature added to it.

            -

            Q2: How much does Anurag i21 cost?

            -

            A2: Anurag i21 costs Rs 7500 (approximately $100) for a single user license key that is valid for one year.

            -

            Q3: Is Anurag i21 compatible with Photoshop?

            -

            A3: Yes, Anurag i21 is compatible with Photoshop CS2 to CS6 versions.

            -

            Q4: How can I get a license key for Anurag i21?

            -

            A4: You can get a license key for Anurag i21 by purchasing it from the official website of Anurag Academy (http://www.anuragacademy.com/) or from their authorized dealers.

            -

            Q5: How can I contact Anurag Academy for support?

            -

A5: You can contact Anurag Academy for support by calling them at +91-933-920-7251 (or any of the consecutive numbers up to +91-933-920-7280) or emailing them at info@anuragacademy.com or support@anuragacademy.com. You can also visit their website (http://www.anuragacademy.com/) for more information.

            -

            0a6ba089eb
            -
            -
            \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Artcam Pro 9 Crack Free 20 The Best Software for CNC Machining and Engraving.md b/spaces/raedeXanto/academic-chatgpt-beta/Artcam Pro 9 Crack Free 20 The Best Software for CNC Machining and Engraving.md deleted file mode 100644 index 4ff8cbc84c75773f5146bfb70249bd6432e2c470..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Artcam Pro 9 Crack Free 20 The Best Software for CNC Machining and Engraving.md +++ /dev/null @@ -1,193 +0,0 @@ -
            -

            Artcam Pro 9 Crack Free 20: How to Download and Install the Software for CNC Machines

            -

If you are looking for software that can help you create stunning designs for CNC machines, you might have heard of Artcam Pro. Artcam Pro is a powerful and easy-to-use program that allows you to design, model, and carve your ideas in wood, metal, stone, or plastic. With Artcam Pro, you can create vectors, reliefs, textures, and colors for your models, and export them in various formats for CNC machines.

            -

            artcampro9crackfree20


            Download Zip 🆗 https://tinourl.com/2uKZyV



            -

However, Artcam Pro is not cheap. The original version costs around $2000, which might be too expensive for some users. That's why some people look for a cracked version of Artcam Pro that they can download and install for free. But is it worth it? What are the benefits and drawbacks of using a cracked version of Artcam Pro? And how can you download and install it safely?

            -

            In this article, we will answer these questions and show you how to download and install Artcam Pro 9 crack free 20. We will also show you how to use Artcam Pro 9 crack free 20 to create amazing designs for CNC machines.

            -

            How to Download Artcam Pro 9 Crack Free 20

            -

            The first step to use Artcam Pro 9 crack free 20 is to download it from a reliable source. There are many websites that claim to offer a free download link for Artcam Pro 9 crack free 20, but not all of them are trustworthy. Some of them might contain viruses, malware, or spyware that can harm your computer or steal your personal information. Some of them might also provide fake or outdated links that don't work.

            -

            How to install artcam pro 9.1 for cnc[^1^]
            -Artcam pro 9.1 software download for cnc - DESIGNS4CNC[^2^]
            -ArtCAM Pro 9 download latest configuration[^2^]
            -ArtCAM Pro 9.1 Crack download for free[^2^]
            -ArtCAM Pro 9.1 is basically the best software for developing 3D images[^2^]
            -ArtCAM pro 9 is a 2D and 3D CAD application[^2^]
            -ArtCAM Pro Free Download Full Version For PC/Window[^3^]
            -Artcam Pro 2012 Full Crack Pc[^3^]
            -Artcam Pro 9 Crack Free 20 | Peatix[^3^]
            -Free Artcam Pro 9 Utorrent Exe Registration Windows Cracked X32[^4^]
            -Artcam pro 9 replacement software
            -Artcam pro 9 tutorial pdf
            -Artcam pro 9 free download with crack
            -Artcam pro 9 system requirements
            -Artcam pro 9 serial number
            -Artcam pro 9 license key
            -Artcam pro 9 full version download
            -Artcam pro 9 windows 10 compatibility
            -Artcam pro 9 vs artcam 2018
            -Artcam pro 9 features and benefits
            -Artcam pro 9 engraving software
            -Artcam pro 9 wood carving design
            -Artcam pro 9 relief modeling
            -Artcam pro 9 vector tools
            -Artcam pro 9 v-bit carving
            -Artcam pro 9 import photos
            -Artcam pro 9 create vectors
            -Artcam pro 9 edit vectors
            -Artcam pro 9 toolpath generation
            -Artcam pro 9 machine simulation
            -Artcam pro 9 CNC router support
            -Artcam pro 9 file formats
            -Artcam pro 9 clipart library
            -Artcam pro 9 online training
            -Artcam pro 9 customer reviews
            -How to use artcam pro 9 effectively
            -How to update artcam pro 9 to artcam 2018
            -How to fix artcam pro 9 errors and bugs
            -How to uninstall artcam pro 9 completely
            -How to get artcam pro 9 for free legally

            -

            Therefore, you need to be careful when choosing a website to download Artcam Pro 9 crack free 20. Here are some tips to help you find a reliable source:

            -
              -
            • Check the reviews and ratings of the website. Look for positive feedback from other users who have downloaded Artcam Pro 9 crack free 20 from that website.
            • -
            • Check the date and size of the download link. Look for a recent and reasonable date and size that match with the original version of Artcam Pro 9.
            • -
            • Check the security and privacy of the website. Look for a secure connection (HTTPS) and a privacy policy that explains how they handle your data.
            • -
            -

            One website that we recommend for downloading Artcam Pro 9 crack free 20 is . This website has good reviews and ratings from other users who have downloaded Artcam Pro 9 crack free 20 from there. It also has a recent and reasonable date and size for the download link. And it has a secure connection and a privacy policy that protects your data.

            -

            To download Artcam Pro 9 crack free 20 from , follow these steps:

            -
              -
            1. Go to using your web browser.
            2. -
            3. Click on the download button on the page.
            4. -
            5. Wait for a few seconds until a new page opens.
            6. -
            7. Click on another download button on the new page.
            8. -
            9. Wait for another few seconds until a pop-up window appears.
            10. -
            11. Click on "Download anyway" on the pop-up window.
            12. -
            13. Choose a location on your computer where you want to save the file.
            14. -
            15. Wait for the download to finish.
            16. -
            -

            Congratulations! You have successfully downloaded Artcam Pro 9 crack free 20 from . The file name should be "ArtCAM_2010_SP4_DVD.iso". The file size should be around 3 GB.
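
As a quick sanity check, you can confirm that the file on disk really is roughly that size before going any further. Here is a minimal Python sketch; the path and the 2.5 GB cut-off are only illustrative:

```python
from pathlib import Path

iso = Path("ArtCAM_2010_SP4_DVD.iso")   # adjust to wherever you saved the download
size_gb = iso.stat().st_size / (1024 ** 3)
print(f"{iso.name}: {size_gb:.2f} GB")

if size_gb < 2.5:   # far smaller than ~3 GB usually means an incomplete or fake download
    print("Warning: this looks too small to be the full DVD image.")
```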

            -

            How to Install Artcam Pro 9 Crack Free 20

            -

            The next step to use Artcam Pro 9 crack free 20 is to install it on your computer. To do this, you need two things:

            -
              -
            • The software file that you downloaded from . This is an ISO file that contains all the files needed to install Artcam Pro 9.
            • -
            • The crack file that activates Artcam Pro 9 without requiring a license key. This is a ZIP file that contains one file named "ArtCAMPro.exe".
            • -
            -

            You can find both files on . After downloading them, follow these steps:

            -
              -
            1. Extract both files using a software like WinRAR or WinZip.
            2. -
            3. Open the folder where you extracted the software file ("ArtCAM_2010_SP4_DVD").
            4. -
            5. Double-click on "setup.exe" to run the setup wizard.
            6. -
            7. Choose your language and click "Next".
            8. -
            9. Accept the license agreement and click "Next".
            10. -
            11. Choose your destination folder where you want to install Artcam Pro 9. The default location is "C:\Program Files\ArtCAM2010". You can change it if you want.
            12. -
            13. Click "Next" until you reach the end of the setup wizard.
            14. -
            15. Click "Install" to start installing Artcam Pro 9 on your computer.
            16. -
            17. Wait for the installation process to finish.
            18. -
            19. Click "Finish" when done.
            20. -
            -

            Congratulations! You have successfully installed Artcam Pro 9 on your computer. But don't launch it yet. You need one more step:

            -
              -
            1. Open the folder where you extracted the crack file ("ArtCAMPro.exe").
            2. -
            3. Copy and paste this file into your installation folder ("C:\Program Files\ArtCAM2010" or wherever you chose).
            4. -
            5. If prompted, choose to replace the original file with the crack file.
            6. -
            7. Click "Yes" to confirm.
            8. -
            -

            Congratulations! You have successfully activated Artcam Pro 9 crack free 20 on your computer. Now you can launch it and enjoy its features.

            -

            How to Use Artcam Pro 9 Crack Free 20

            -

The next step to use Artcam Pro 9 crack free 20 is to create your own designs for CNC machines. To do this, you need to follow these steps:

            -
              -
            1. Launch Artcam Pro 9 crack free 20 from your desktop or start menu.
            2. -
            3. Explore the interface and tools of Artcam Pro 9. You will see a menu bar, a toolbar, a project window, a 2D view, a 3D view, and a status bar.
            4. -
            5. Create a new project or open an existing one. To create a new project, go to File > New Model and choose the size, resolution, and orientation of your model. To open an existing project, go to File > Open and select the project file you want to use.
            6. -
            7. Design your model using vectors, reliefs, textures, and colors. Vectors are the basic shapes that define your model. Reliefs are the 3D effects that add depth and dimension to your model. Textures are the patterns and effects that add detail and realism to your model. Colors are the colors and gradients that add visual appeal to your model.
            8. -
            -
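
When you pick the size and resolution in File > New Model (step 5 above), it helps to know how many pixels that implies, because the pixel count drives both the level of detail and the file size. A small back-of-the-envelope sketch; the 10 x 8 inch size is only an example, and 300 dpi matches the value recommended later in the export section:

```python
# pixels = inches x resolution (dots per inch)
width_in, height_in = 10.0, 8.0    # example model size in inches
resolution_dpi = 300               # a common choice, also used later in the export settings

width_px = int(width_in * resolution_dpi)
height_px = int(height_in * resolution_dpi)
print(f"{width_px} x {height_px} pixels, "
      f"about {width_px * height_px / 1e6:.1f} megapixels")
```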

            How to Create Vectors in Artcam Pro 9

            -

            Vectors are the basic shapes that define your model. You can create vectors using the vector creation tools on the toolbar or the menu bar. Here are some examples of how to create vectors in Artcam Pro 9:

            -
              -
            • To create a rectangle, click on the rectangle tool on the toolbar or go to Create Vectors > Rectangle. Then click and drag on the 2D view to draw a rectangle.
            • -
            • To create a circle, click on the circle tool on the toolbar or go to Create Vectors > Circle. Then click and drag on the 2D view to draw a circle.
            • -
            • To create a curve, click on the curve tool on the toolbar or go to Create Vectors > Curve. Then click and drag on the 2D view to draw a curve. You can adjust the shape of the curve by moving the control points.
            • -
            • To create a text, click on the text tool on the toolbar or go to Create Vectors > Text. Then type your text in the text window and click OK. You can change the font, size, alignment, and style of your text using the options on the text window.
            • -
            -

            You can also import vectors from other sources, such as images, PDF files, DXF files, etc. To import vectors, go to File > Import > Vector File and select the file you want to import.

            -

            How to Create Reliefs in Artcam Pro 9

            -

            Reliefs are the 3D effects that add depth and dimension to your model. You can create reliefs using the relief creation tools on the toolbar or the menu bar. Here are some examples of how to create reliefs in Artcam Pro 9:

            -
              -
            • To create a relief from a vector, select the vector and click on the relief tool on the toolbar or go to Relief > Create Relief From Vectors. Then choose the shape, height, and angle of the relief.
            • -
            • To create a relief from an image, go to File > Import > Bitmap Image and select the image file you want to use. Then click on the relief tool on the toolbar or go to Relief > Create Relief From Image. Then choose the height and contrast of the relief.
            • -
            • To create a relief from a texture, open the Relief Clipart Library and select a texture from the existing ones or your custom ones. Then click on the texture tool on the toolbar or go to Relief > Texture Relief. Then choose the height and scale of the texture.
            • -
            -

            You can also modify reliefs using various tools, such as smooth, sculpt, erase, blend, etc. To modify reliefs, select the relief and click on the tool you want to use on the toolbar or the menu bar.

            -

            How to Create Textures in Artcam Pro 9

            -

            Textures are the patterns and effects that add detail and realism to your model. You can create textures using the texture creation tools on the toolbar or the menu bar. Here are some examples of how to create textures in Artcam Pro 9:

            -
              -
            • To create a texture from a vector, select the vector and click on the texture tool on the toolbar or go to Texture > Create Texture From Vectors. Then choose the type, size, and angle of the texture.
            • -
            • To create a texture from an image, go to File > Import > Bitmap Image and select the image file you want to use. Then click on the texture tool on the toolbar or go to Texture > Create Texture From Image. Then choose the size and contrast of the texture.
            • -
            • To create a texture from a relief, select the relief and click on the texture tool on the toolbar or go to Texture > Create Texture From Relief. Then choose the type, size, and angle of the texture.
            • -
            -

            You can also modify textures using various tools, such as smooth, sculpt, erase, blend, etc. To modify textures, select the texture and click on the tool you want to use on the toolbar or the menu bar.

            -

            How to Create Colors in Artcam Pro 9

            -

            Colors are the colors and gradients that add visual appeal to your model. You can create colors using the color creation tools on the toolbar or the menu bar. Here are some examples of how to create colors in Artcam Pro 9:

            -
              -
            • To create a color from the color palette, click on the color tool on the toolbar or go to Color > Add Color. Then choose a color from the color palette or use the color picker to select a custom color.
            • -
            • To create a gradient from two colors, click on the gradient tool on the toolbar or go to Color > Add Gradient. Then choose two colors from the color palette or use the color picker to select custom colors. Then choose the direction and type of the gradient.
            • -
            • To create a color from an image, go to File > Import > Bitmap Image and select the image file you want to use. Then click on the color tool on the toolbar or go to Color > Add Color From Image. Then choose a color from the image using the color picker.
            • -
            -

            You can also modify colors using various tools, such as blend, invert, adjust, etc. To modify colors, select the color and click on the tool you want to use on the toolbar or the menu bar.

            -

            How to Export Your Model from Artcam Pro 9 Crack Free 20

            -

            The final step to use Artcam Pro 9 crack free 20 is to export your model in a format that can be used by CNC machines. To do this, you need to follow these steps:

            -
              -
            1. Choose the output format and settings for your model. There are different output formats for CNC machines, such as STL, DXF, G-code, etc. Each format has its own advantages and disadvantages, depending on your CNC machine and your design. You can compare some of the output formats in the table below:
            2. -
| Output Format | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| STL | A 3D file format that represents your model as a mesh of triangles | Compatible with most CNC machines and software | Does not contain any information about colors, textures, or toolpaths |
| DXF | A 2D file format that represents your model as a collection of vectors | Compatible with most CNC machines and software | Does not contain any information about reliefs, textures, or toolpaths |
| G-code | A text file format that contains instructions for your CNC machine on how to move and control the tool | Contains all the information about toolpaths, speeds, feeds, etc. | Not compatible with all CNC machines and software |
              -
            1. Choose the output settings for your model. There are different output settings for CNC machines, such as resolution, tolerance, scale, origin, etc. Each setting affects how your model will look and perform on your CNC machine. You can find some of the recommended output settings in the table below:
            2. -
            - | Output Setting | Description | Recommended Value | | --- | --- | --- | | Resolution | The level of detail of your model in terms of pixels per inch (ppi) or dots per inch (dpi) | 300 ppi or dpi | | Tolerance | The maximum deviation between your model and its output representation in terms of distance or angle | 0.001 inch or 0.01 degree | | Scale | The ratio between your model size and its output size in terms of percentage or units | 100% or 1:1 | | Origin | The point on your model that corresponds to the zero point on your CNC machine in terms of coordinates (X,Y,Z) | Center or bottom left |
              -
            1. Export your model from Artcam Pro 9 crack free 20. To export your model, go to File > Export > Model File and select the output format you want to use. Then choose a location on your computer where you want to save the file. Then adjust the output settings according to your preferences. Then click OK.
            2. -
            -
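To make the format comparison above more concrete, here is a minimal sketch of what an exported G-code file can contain. The file name, coordinates, and feed rates below are hypothetical values chosen for illustration only; real toolpaths are produced by the software's own machining and post-processing tools, not written by hand or by a script like this.

```python
# Minimal sketch (hypothetical values): writing a few G-code moves by hand
# to show what the exported text format looks like.
gcode_lines = [
    "G21",               # use millimetre units
    "G90",               # absolute positioning
    "G0 Z5.000",         # rapid move: lift the tool to a safe height
    "G0 X0.000 Y0.000",  # rapid move to the work origin
    "G1 Z-1.000 F100",   # feed move: plunge 1 mm into the material at 100 mm/min
    "G1 X20.000 F300",   # feed move: cut 20 mm along the X axis at 300 mm/min
    "G0 Z5.000",         # retract to the safe height
    "M2",                # end of program
]

with open("example_toolpath.nc", "w") as f:
    f.write("\n".join(gcode_lines) + "\n")
```

Opening the exported file in a text editor or a G-code previewer is a quick way to check that the units, origin, and feed values match the output settings you chose above.

-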

            Congratulations! You have successfully exported your model from Artcam Pro 9 crack free 20. Now you can use it with your CNC machine.

            -

            Conclusion

            -

            In this article, we have shown you how to download and install Artcam Pro 9 crack free 20 on your computer. We have also shown you how to use Artcam Pro 9 crack free 20 to create amazing designs for CNC machines using vectors, reliefs, textures, and colors. And we have shown you how to export your model from Artcam Pro 9 crack free 20 in a format that can be used by CNC machines.

            -

            We hope you have enjoyed this article and learned something new. Artcam Pro 9 crack free 20 is a powerful and easy-to-use software that can help you create stunning designs for CNC machines. However, it is not a legal or safe software. It is a cracked version of Artcam Pro 9 that might contain viruses, malware, or spyware that can harm your computer or steal your personal information. It might also violate the intellectual property rights of Autodesk, the original developer of Artcam Pro 9.

            -

            Therefore, we do not recommend using Artcam Pro 9 crack free 20 for any purpose. If you want to use Artcam Pro 9 legally and safely, you should buy it from Autodesk or an authorized reseller. You will get a license key that will activate Artcam Pro 9 without requiring a crack file. You will also get access to updates, support, and features that will enhance your experience with Artcam Pro 9.

            -

            Thank you for reading this article. We hope you have found it useful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

            -

            FAQs

            -

            Here are some frequently asked questions about Artcam Pro 9 crack free 20:

            -
              -
            1. What are the system requirements for Artcam Pro 9?
            2. -
            -

            The minimum system requirements for Artcam Pro 9 are:

            -
              -
            • Operating system: Windows XP, Vista, 7, 8, or 10
            • -
            • Processor: Intel Pentium 4 or AMD Athlon XP or higher
            • -
            • Memory: 1 GB RAM or more
            • -
            • Hard disk space: 5 GB or more
            • -
            • Graphics card: 128 MB or more
            • -
            • Display resolution: 1024 x 768 or higher
            • -
            • Internet connection: Required for activation and updates
            • -
            -
              -
            1. Is Artcam Pro 9 compatible with Windows 10?
            2. -
            -

            Yes, Artcam Pro 9 is compatible with Windows 10. However, you might need to run it in compatibility mode for Windows 7 or 8. To do this, right-click on the Artcam Pro 9 shortcut on your desktop or start menu and select Properties. Then go to the Compatibility tab and check the box that says "Run this program in compatibility mode for". Then choose Windows 7 or 8 from the drop-down menu. Then click OK.

            -
              -
            1. What are the alternatives to Artcam Pro 9?
            2. -
            -

            There are many alternatives to Artcam Pro 9 that can help you create designs for CNC machines. Some of them are:

            -
              -
            • Vectric Aspire: A software that combines the features of Vectric VCarve and Vectric Cut3D. It allows you to create 2D and 3D designs using vectors, reliefs, textures, and colors. It also has a built-in toolpath generator that can export G-code for CNC machines.
            • -
            • Carveco Maker: A software that is based on Artcam Pro 9 but has been updated and improved by Carveco. It allows you to create 2D and 3D designs using vectors, reliefs, textures, and colors. It also has a built-in toolpath generator that can export G-code for CNC machines.
            • -
            • Fusion 360: A software that is developed by Autodesk and integrates CAD, CAM, and CAE features. It allows you to create 2D and 3D designs using sketches, solids, surfaces, and meshes. It also has a built-in toolpath generator that can export G-code for CNC machines.
            • -
            -
              -
            1. Is it legal to use a cracked version of Artcam Pro 9?
            2. -
            -

            No, it is not legal to use a cracked version of Artcam Pro 9. A cracked version of Artcam Pro 9 is a pirated version that has been modified to bypass the license key requirement. It violates the intellectual property rights of Autodesk, the original developer of Artcam Pro 9. It also violates the terms and conditions of use of Artcam Pro 9. Using a cracked version of Artcam Pro 9 can result in legal actions, fines, or penalties from Autodesk or other authorities.

            -
              -
            1. How can I get support for Artcam Pro 9?
            2. -
            -

            If you have bought Artcam Pro 9 from Autodesk or an authorized reseller, you can get support from Autodesk or your reseller. You can contact them by phone, email, chat, or web form. You can also access online resources such as documentation, tutorials, forums, blogs, etc.

            -

            If you have downloaded Artcam Pro 9 crack free 20 from an unreliable source, you cannot get support from Autodesk or your reseller. You can only rely on online resources such as documentation, tutorials, forums, blogs, etc. However, these resources might not be accurate, updated, or relevant to your issue. You might also encounter viruses, malware, or spyware that can harm your computer or steal your personal information.

            -

            Therefore, we recommend that you buy Artcam Pro 9 from Autodesk or an authorized reseller and get support from them. You will get a license key that will activate Artcam Pro 9 without requiring a crack file. You will also get access to updates, support, and features that will enhance your experience with Artcam Pro 9.

            -

            \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Autocad Map 3d 2011 Crack ((INSTALL)) Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Autocad Map 3d 2011 Crack ((INSTALL)) Download.md deleted file mode 100644 index fe90ee9ffe02ea634b655c24b4edc54e8b241a36..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Autocad Map 3d 2011 Crack ((INSTALL)) Download.md +++ /dev/null @@ -1,115 +0,0 @@ - -

            Autocad Map 3d 2011 Crack Download: A Comprehensive Guide

            -

            If you are looking for a way to create, edit, analyze, and share maps, plans, and drawings, you might be interested in Autocad Map 3d 2011. This is a powerful software that combines the features of AutoCAD with advanced GIS functions. However, this software is not cheap, and you might not be able to afford it or find a legitimate license. That's why some people resort to downloading a crack version of Autocad Map 3d 2011, which allows them to use the software for free without activation.

            -

            -

            But is it safe and legal to download Autocad Map 3d 2011 crack? How do you download, install, and use it? What are the risks and precautions of using cracked software? And are there any alternatives to Autocad Map 3d 2011 crack? In this article, we will answer all these questions and more. We will provide you with a comprehensive guide on how to download Autocad Map 3d 2011 crack, as well as its features, benefits, tips, tricks, errors, solutions, risks, precautions, and alternatives. By the end of this article, you will have all the information you need to decide whether to use Autocad Map 3d 2011 crack or not.

            -

            Features and Benefits of Autocad Map 3d 2011

            -

            Autocad Map 3d 2011 is a software that allows you to create and edit maps, plans, and drawings with powerful tools and data sources. You can use it for various purposes such as planning, design, engineering, construction, management, analysis, visualization, presentation, and collaboration. Some of the features and benefits of Autocad Map 3d 2011 are:

            -
              -
            • Create and edit maps, plans, and drawings with powerful tools: You can use Autocad Map 3d 2011 to create maps from various data sources such as CAD, GIS, raster, and web services. You can also edit existing maps and drawings with tools such as move, copy, rotate, scale, trim, extend, offset, and more. You can also use commands such as draw, erase, undo, redo, zoom, pan, and more to modify your maps and drawings.
            • -
            • Analyze and visualize spatial data with advanced GIS functions: You can use Autocad Map 3d 2011 to perform spatial analysis on your maps and data. You can use functions such as buffer, overlay, dissolve, clip, intersect, union, and more to create new data layers from existing ones. You can also use functions such as query, select, filter, classify, label, symbolize, and more to display and manipulate your data according to your needs. You can also create thematic maps, charts, reports, and legends to visualize your data in different ways.
            • -
            • Share and collaborate with other users and applications: You can use Autocad Map 3d 2011 to share and collaborate with other users and applications. You can export and import your maps and data in various formats such as DWG, DXF, SHP, KML, XML, CSV, and more. You can also publish your maps and data to the web or to a server using web services or FDO providers. You can also access and edit your maps and data from other applications such as AutoCAD Civil 3D, AutoCAD Architecture, AutoCAD Mechanical, AutoCAD Electrical, and more.
            • -
            -

            These are just some of the features and benefits of Autocad Map 3d 2011. There are many more that you can explore and use once you download the software.

            -

            How to Download Autocad Map 3d 2011 Crack

            -

            If you want to download Autocad Map 3d 2011 crack, you will need to find a torrent site that offers software downloads. A torrent site is a website that hosts torrent files or magnet links that allow users to download files from other users using a peer-to-peer network. Torrent sites are often used to share illegal or pirated content such as movies, music, games, software, and more.

            -

            However, not all torrent sites are safe and reliable. Some of them may contain malware or viruses that can harm your device or data. Some of them may also have fake or corrupted files that will not work properly. Therefore, you need to be careful when choosing a torrent site for downloading Autocad Map 3d 2011 crack. Here are some tips on how to find a reliable torrent site for software:

            -

            -
              -
            • Check the reputation and reviews of the torrent site: You can use online tools such as Trustpilot, Sitejabber, or WOT to check the reputation and reviews of the torrent site. These tools will show you the ratings and feedbacks of other users who have used the torrent site before. You can also use online forums or communities such as Reddit or Quora to ask for recommendations or opinions from other users who have downloaded software from torrent sites.
            • -
            • Check the quality and quantity of the torrent files or magnet links: You can use online tools such as Torrentz2, Torrents.io, or TorrentSeeker to check the quality and quantity of the torrent files or magnet links available on the torrent site. These tools will show you the number of seeders (users who have the complete file) and leechers (users who are downloading the file) for each torrent file or magnet link. They will also show you the size, name, date, and comments of each torrent file or magnet link. You can use these information to compare and choose the best torrent file or magnet link for downloading Autocad Map 3d 2011 crack.
            • -
            • Check the security and privacy of the torrent site: You can use online tools such as VirusTotal, URLVoid, or ScanURL to check the security and privacy of the torrent site. These tools will scan the torrent site for any malware, viruses, phishing, or malicious content that can harm your device or data. They will also show you the reputation and trustworthiness of the torrent site based on various sources and indicators. You can also use online tools such as Whois, IP Location, or DNS Lookup to check the domain name, IP address, location, and owner of the torrent site. You can use these information to verify the identity and legitimacy of the torrent site.
            • -
            -

            Once you have found a reliable torrent site for software, you can follow these steps to download Autocad Map 3d 2011 crack:

            -
              -
            1. Find a torrent file or magnet link for Autocad Map 3d 2011 crack: You can use the search function or browse the categories of the torrent site to find a torrent file or magnet link for Autocad Map 3d 2011 crack. You can also use online tools such as Torrentz2, Torrents.io, or TorrentSeeker to search for multiple torrent sites at once. You can also use online tools such as Google, Bing, or DuckDuckGo to search for keywords such as "Autocad Map 3d 2011 crack torrent" or "Autocad Map 3d 2011 crack magnet link". You can also use online tools such as Bit Che, qBittorrent, or Tixati to search for torrent files or magnet links from within your torrent client.
            2. -
            3. Download the torrent file or magnet link: Once you have found a torrent file or magnet link for Autocad Map 3d 2011 crack, you can download it by clicking on it or copying and pasting it into your browser. You will need a torrent client to download the torrent file or magnet link. A torrent client is a software that allows you to download files from other users using a peer-to-peer network. Some of the popular torrent clients are uTorrent, BitTorrent, Vuze, Deluge, and Transmission. You can download and install any of these torrent clients from their official websites.
            4. -
            5. Open the torrent file or magnet link with a torrent client: Once you have downloaded the torrent file or magnet link, you can open it with your torrent client by double-clicking on it or dragging and dropping it into your torrent client. Your torrent client will then start downloading the files from other users who have the same torrent file or magnet link. You can see the progress and status of your download in your torrent client.
            6. -
            7. Choose a destination folder and start the download: Before you start downloading the files, you can choose a destination folder where you want to save them. You can do this by clicking on the "Options" or "Preferences" button in your torrent client and selecting a folder in your device. You can also create a new folder if you want. Once you have chosen a destination folder, you can start the download by clicking on the "Start" or "Resume" button in your torrent client.
            8. -
            -

            These are the steps to download Autocad Map 3d 2011 crack from a torrent site. Depending on the size of the files and the speed of your internet connection, it may take some time to complete the download. You can pause or stop the download at any time if you want.

            -

            How to Install Autocad Map 3d 2011 Crack

            -

            After you have downloaded Autocad Map 3d 2011 crack from a torrent site, you will need to install it on your device. However, before you do that, you will need to disable your antivirus and firewall software. This is because most antivirus and firewall software will detect and block cracked software as potential threats. Therefore, you will need to disable them temporarily while installing Autocad Map 3d 2011 crack. You can do this by clicking on the icon of your antivirus or firewall software in your system tray and selecting "Disable" or "Turn off". You can also do this by opening your antivirus or firewall software and changing its settings.

            -

            Once you have disabled your antivirus and firewall software, you can follow these steps to install Autocad Map 3d 2011 crack:

            -
              -
            1. Extract the downloaded files with a file archiver: The files that you have downloaded from a torrent site are usually compressed in a ZIP, RAR, or ISO format. You will need to extract them with a file archiver such as WinRAR, 7-Zip, or PowerISO. You can download and install any of these file archivers from their official websites. Once you have installed a file archiver, you can right-click on the downloaded files and select "Extract here" or "Extract to" and choose a folder in your device. You can also open the downloaded files with your file archiver and extract them manually.
            2. -
            3. Run the setup file and follow the instructions: After you have extracted the downloaded files, you will find a setup file that will install Autocad Map 3d 2011 on your device. The setup file may have different names such as "setup.exe", "install.exe", "autorun.exe", or "Autocad Map 3d 2011.exe". You can run the setup file by double-clicking on it or right-clicking on it and selecting "Run as administrator". You will then see a window that will guide you through the installation process. You will need to accept the terms and conditions, choose a language, select a destination folder, and click on "Next" or "Install" until the installation is complete.
            4. -
            5. Copy the crack file and paste it into the installation folder: After you have installed Autocad Map 3d 2011 on your device, you will need to copy the crack file and paste it into the installation folder. The crack file is a file that will bypass the activation process and allow you to use Autocad Map 3d 2011 for free without a license. The crack file may have different names such as "crack.exe", "patch.exe", "keygen.exe", or "Autocad Map 3d 2011 Crack.exe". You can find the crack file in the same folder where you extracted the downloaded files. You can copy the crack file by right-clicking on it and selecting "Copy" or pressing Ctrl+C. You can then go to the installation folder where you installed Autocad Map 3d 2011 on your device. The installation folder may be located in C:\Program Files\Autodesk\Autocad Map 3d 2011 or C:\Program Files (x86)\Autodesk\Autocad Map 3d 2011 depending on your device. You can paste the crack file by right-clicking on an empty space in the installation folder and selecting "Paste" or pressing Ctrl+V.
            6. -
            7. Launch Autocad Map 3d 2011 and enjoy: After you have copied the crack file and pasted it into the installation folder, you can launch Autocad Map 3d 2011 and enjoy using it for free without activation. You can launch Autocad Map 3d 2011 by double-clicking on its icon on your desktop or start menu. You can also launch Autocad Map 3d 2011 by going to the installation folder and double-clicking on the crack file or the original executable file. You will then see a window that will open Autocad Map 3d 2011 on your device.
            8. -
            -

            These are the steps to install Autocad Map 3d 2011 crack on your device. You can now use Autocad Map 3d 2011 for free without activation.

            -

            How to Use Autocad Map 3d 2011 Crack

            -

            Now that you have installed Autocad Map 3d 2011 crack on your device, you can use it to create and edit maps, plans, and drawings with powerful tools and data sources. However, if you are new to Autocad Map 3d 2011 or cracked software, you may encounter some difficulties or errors when using it. Therefore, we will provide you with some tips and tricks for using Autocad Map 3d 2011 effectively, as well as some common errors and solutions for using Autocad Map 3d 2011 crack. Here are some of them:

            -

            Tips and tricks for using Autocad Map 3d 2011 effectively

            -

            Autocad Map 3d 2011 is a complex and powerful software that requires some skills and knowledge to use it effectively. Here are some tips and tricks that can help you improve your productivity and creativity when using Autocad Map 3d 2011:

            -
              -
            • Use the help and tutorials: Autocad Map 3d 2011 comes with a comprehensive help system and tutorials that can guide you through the basic and advanced features and functions of the software. You can access the help and tutorials by clicking on the "Help" or "Tutorials" button in the software or by pressing F1 on your keyboard. You can also access the online help and tutorials by visiting the official website of Autodesk or by searching online for keywords such as "Autocad Map 3d 2011 help" or "Autocad Map 3d 2011 tutorials". You can learn a lot from the help and tutorials and improve your skills and knowledge of Autocad Map 3d 2011.
            • -
            • Use the keyboard shortcuts: Autocad Map 3d 2011 has many keyboard shortcuts that can help you perform various tasks faster and easier. You can use keyboard shortcuts to access commands, tools, menus, options, views, modes, and more. You can also customize your own keyboard shortcuts to suit your preferences and needs. You can find the list of keyboard shortcuts by clicking on the "Tools" or "Options" button in the software or by pressing Alt+F11 on your keyboard. You can also find the list of keyboard shortcuts online by searching for keywords such as "Autocad Map 3d 2011 keyboard shortcuts". You can save time and effort by using keyboard shortcuts when using Autocad Map 3d 2011.
            • -
            • Use the templates and presets: Autocad Map 3d 2011 has many templates and presets that can help you create and edit maps, plans, and drawings faster and easier. You can use templates and presets to apply predefined settings, styles, formats, layouts, symbols, colors, and more to your maps and drawings. You can also create your own templates and presets to suit your preferences and needs. You can find the templates and presets by clicking on the "File" or "New" button in the software or by pressing Ctrl+N on your keyboard. You can also find the templates and presets online by searching for keywords such as "Autocad Map 3d 2011 templates" or "Autocad Map 3d 2011 presets". You can save time and effort by using templates and presets when using Autocad Map 3d 2011.
            • -
            -

            These are just some of the tips and tricks for using Autocad Map 3d 2011 effectively. There are many more that you can discover and use once you start using the software.

            -

            Common errors and solutions for using Autocad Map 3d 2011 crack

            -

            Autocad Map 3d 2011 crack is a cracked version of Autocad Map 3d 2011 that allows you to use the software for free without activation. However, using cracked software can also cause some errors or problems that can affect your performance and experience. Therefore, we will provide you with some common errors and solutions for using Autocad Map 3d 2011 crack. Here are some of them:

            -
              -
            • Error: Invalid serial number or product key: This error may occur when you try to install or launch Autocad Map 3d 2011 crack. This error means that the serial number or product key that you entered is not valid or recognized by the software. This may happen because the serial number or product key is already used by another user, expired, blocked, or corrupted. To fix this error, you can try the following solutions:
                -
              • Use a different serial number or product key that is valid and unused. You can find various serial numbers or product keys online by searching for keywords such as "Autocad Map 3d 2011 serial number" or "Autocad Map 3d 2011 product key". However, be careful when using online serial numbers or product keys as they may not work properly or may contain malware or viruses.
              • -
              • Use a keygen or a patch that can generate a valid and unused serial number or product key for Autocad Map 3d 2011. A keygen or a patch is a software that can create a serial number or product key for a specific software. You can find various keygens or patches online by searching for keywords such as "Autocad Map 3d 2011 keygen" or "Autocad Map 3d 2011 patch". However, be careful when using online keygens or patches as they may not work properly or may contain malware or viruses.
              • -
              • Use a crack file that can bypass the activation process and allow you to use Autocad Map 3d 2011 without a serial number or product key. A crack file is a file that modifies the original executable file of a software and removes its protection mechanisms. You can find various crack files online by searching for keywords such as "Autocad Map 3d 2011 crack". However, be careful when using online crack files as they may not work properly or may contain malware or viruses.
              • -
              -
            • -
            • Error: Failed to initialize FDO provider: This error may occur when you try to access or use data sources from FDO providers in Autocad Map 3d 2011 crack. FDO providers are software components that allow you to connect to and manipulate data from various sources such as databases, web services, files, and more. This error means that the FDO provider that you selected is not installed, configured, registered, or compatible with Autocad Map 3d 2011. To fix this error, you can try the following solutions:
                -
              • Install, configure, register, or update the FDO provider that you want to use. You can find various FDO providers online by searching for keywords such as "Autocad Map 3d 2011 FDO provider". However, be careful when using online FDO providers as they may not work properly or may contain malware or viruses.
              • -
• Select a different FDO provider that is installed, configured, registered, and compatible with Autocad Map 3d 2011.

Risks of using Autocad Map 3d 2011 crack

Some of the risks of using Autocad Map 3d 2011 crack are:

                -
                  -
                • Legal issues and ethical concerns: Using cracked software is illegal and unethical, as it violates the intellectual property rights of the software developers and distributors. You may face legal consequences such as fines, lawsuits, or criminal charges if you are caught using cracked software. You may also face ethical consequences such as losing your reputation, credibility, or trust among your peers, clients, or employers if you are found using cracked software. You may also harm the software industry and the innovation and quality of software products if you use cracked software.
                • -
                • Potential malware and virus infections: Using cracked software exposes you to potential malware and virus infections, as cracked software may contain malicious code or hidden programs that can harm your device or data. You may download malware or viruses from torrent sites, file archivers, keygens, patches, or crack files that can infect your device or software. You may also activate malware or viruses when you run or install cracked software that can damage your device or data. You may lose your files, data, or personal information, or experience slowdowns, crashes, errors, or glitches on your device or software due to malware or virus infections.
                • -
                • Cyberattacks from hackers and criminals: Using cracked software exposes you to cyberattacks from hackers and criminals who may exploit your device and data. You may connect to unsecured or compromised networks or servers when you download or use cracked software that can expose your device and data to hackers and criminals. You may also share your device and data with other users when you use peer-to-peer networks or web services that can expose your device and data to hackers and criminals. You may lose your files, data, or personal information, or experience identity theft, fraud, blackmail, ransomware, phishing, or other cybercrimes due to cyberattacks from hackers and criminals.
                • -
                -

                These are some of the risks of using Autocad Map 3d 2011 crack. There may be other risks that you may face when using Autocad Map 3d 2011 crack. You should be aware of these risks and weigh them against the benefits of using Autocad Map 3d 2011 crack.

                -

                Precautions of using Autocad Map 3d 2011 crack

                -

                Some of the precautions of using Autocad Map 3d 2011 crack are:

                -
                  -
                • Use a VPN service: A VPN service is a service that creates a secure and encrypted connection between your device and a remote server. A VPN service can help you protect your device and data from malware, viruses, hackers, and criminals when you download or use cracked software. A VPN service can also help you bypass geo-restrictions, censorship, or firewalls that may prevent you from accessing torrent sites, file archivers, keygens, patches, or crack files. You can find various VPN services online by searching for keywords such as "VPN service". However, be careful when using online VPN services as they may not work properly or may contain malware or viruses.
                • -
                • Use a sandbox or a virtual machine: A sandbox or a virtual machine is a software that creates a separate and isolated environment on your device where you can run or install other software without affecting your device or data. A sandbox or a virtual machine can help you protect your device and data from malware, viruses, hackers, and criminals when you download or install cracked software. A sandbox or a virtual machine can also help you test and evaluate cracked software without affecting your device or data. You can find various sandbox or virtual machine software online by searching for keywords such as "sandbox software" or "virtual machine software". However, be careful when using online sandbox or virtual machine software as they may not work properly or may contain malware or viruses.
                • -
                • Use a backup or a recovery tool: A backup or a recovery tool is a software that creates a copy or a restore point of your device or data that you can use to recover your device or data in case of any damage or loss. A backup or a recovery tool can help you protect your device and data from malware, viruses, hackers, and criminals when you download or use cracked software. A backup or a recovery tool can also help you restore your device or data to its original state before using cracked software. You can find various backup or recovery tools online by searching for keywords such as "backup tool" or "recovery tool". However, be careful when using online backup or recovery tools as they may not work properly or may contain malware or viruses.
                • -
                -

                These are some of the precautions of using Autocad Map 3d 2011 crack. There may be other precautions that you may take when using Autocad Map 3d 2011 crack. You should be careful and responsible when using Autocad Map 3d 2011 crack.

                -

                Alternatives to Autocad Map 3d 2011 Crack

                -

                If you are not comfortable or satisfied with using Autocad Map 3d 2011 crack, you may want to consider some alternatives to Autocad Map 3d 2011 crack. There are some free and open-source software for mapping and GIS that you can use legally and ethically without any risks or precautions. There are also some paid software for mapping and GIS that you can use legally and ethically with some trial versions or discounts. Here are some of the alternatives to Autocad Map 3d 2011 crack:

                -

                Free and open-source software for mapping and GIS

                -

                Some of the free and open-source software for mapping and GIS are:

                -
                  -
                • QGIS: QGIS is a free and open-source software that allows you to create, edit, analyze, and share maps, plans, and drawings with powerful tools and data sources. You can use QGIS for various purposes such as planning, design, engineering, construction, management, analysis, visualization, presentation, and collaboration. You can download and install QGIS from its official website: https://www.qgis.org/.
                • -
                • GRASS GIS: GRASS GIS is a free and open-source software that allows you to perform spatial analysis on your maps and data with advanced GIS functions. You can use GRASS GIS for various purposes such as modeling, simulation, statistics, geostatistics, image processing, remote sensing, raster and vector processing, and more. You can download and install GRASS GIS from its official website: https://grass.osgeo.org/.
                • -
                • MapServer: MapServer is a free and open-source software that allows you to publish your maps and data to the web or to a server using web services or FDO providers. You can use MapServer for various purposes such as web mapping, web GIS, spatial data infrastructure, geospatial web applications, and more. You can download and install MapServer from its official website: https://mapserver.org/.
                • -
                -

                These are just some of the free and open-source software for mapping and GIS. There are many more that you can explore and use legally and ethically without any risks or precautions.

                -

                Paid software for mapping and GIS with trial versions or discounts

                -

                Some of the paid software for mapping and GIS with trial versions or discounts are:

                -
                  -
                • ArcGIS: ArcGIS is a paid software that allows you to create, edit, analyze, and share maps, plans, and drawings with powerful tools and data sources. You can use ArcGIS for various purposes such as planning, design, engineering, construction, management, analysis, visualization, presentation, and collaboration. You can buy or subscribe to ArcGIS from its official website: https://www.esri.com/en-us/industries/overview. You can also try ArcGIS for free for 21 days by signing up for a trial version: https://www.esri.com/en-us/industries/overview/free-trial.
                • -
                • Global Mapper: Global Mapper is a paid software that allows you to perform spatial analysis on your maps and data with advanced GIS functions. You can use Global Mapper for various purposes such as modeling, simulation, statistics, geostatistics, image processing, remote sensing, raster and vector processing, and more. You can buy or subscribe to Global Mapper from its official website: https://www.bluemarblegeo.com/products/global-mapper.php. You can also try Global Mapper for free for 14 days by downloading a trial version: https://www.bluemarblegeo.com/products/global-mapper-download.php.
                • -
                • MapInfo Pro: MapInfo Pro is a paid software that allows you to publish your maps and data to the web or to a server using web services or FDO providers. You can use MapInfo Pro for various purposes such as web mapping, web GIS, spatial data infrastructure, geospatial web applications, and more. You can buy or subscribe to MapInfo Pro from its official website: https://www.precisely.com/product/precisely-mapinfo/mapinfo-pro. You can also try MapInfo Pro for free for 30 days by downloading a trial version: https://www.precisely.com/product/precisely-mapinfo/mapinfo-pro-trial.
                • -
                -

                These are just some of the paid software for mapping and GIS with trial versions or discounts. There are many more that you can explore and use legally and ethically with some trial versions or discounts.

                -

                Conclusion

                -

                In this article, we have provided you with a comprehensive guide on how to download Autocad Map 3d 2011 crack, as well as its features, benefits, tips, tricks, errors, solutions, risks, precautions, and alternatives. We hope that this article has helped you understand the pros and cons of using Autocad Map 3d 2011 crack and make an informed decision whether to use it or not.

                -

                If you decide to use Autocad Map 3d 2011 crack, we advise you to be careful and responsible when downloading, installing, and using it. We also advise you to take some precautions such as using a VPN service, a sandbox or a virtual machine, and a backup or a recovery tool when using Autocad Map 3d 2011 crack. We also advise you to be aware of the legal issues and ethical concerns of using cracked software and respect the intellectual property rights of the software developers and distributors.

                -

                If you decide not to use Autocad Map 3d 2011 crack , we advise you to consider some alternatives to Autocad Map 3d 2011 crack such as free and open-source software for mapping and GIS or paid software for mapping and GIS with trial versions or discounts. These alternatives can help you create and edit maps, plans, and drawings with powerful tools and data sources legally and ethically without any risks or precautions.

                -

                We hope that this article has been useful and informative for you. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading and have a great day!

                -

                FAQs

                -

                Here are some of the frequently asked questions about Autocad Map 3d 2011 crack:

                -
                  -
                1. What is Autocad Map 3d 2011?
                2. -

                  Autocad Map 3d 2011 is a software that allows you to create and edit maps, plans, and drawings with powerful tools and data sources. You can use it for various purposes such as planning, design, engineering, construction, management, analysis, visualization, presentation, and collaboration.

                  -
                3. What is Autocad Map 3d 2011 crack?
                4. -

                  Autocad Map 3d 2011 crack is a cracked version of Autocad Map 3d 2011 that allows you to use the software for free without activation. It is a file that modifies the original executable file of the software and removes its protection mechanisms.

                  -
                5. How to download Autocad Map 3d 2011 crack?
                6. -

                  To download Autocad Map 3d 2011 crack, you will need to find a torrent site that offers software downloads. You will then need to find a torrent file or magnet link for Autocad Map 3d 2011 crack. You will then need to download the torrent file or magnet link with a torrent client. You will then need to choose a destination folder and start the download.

                  -
                7. How to install Autocad Map 3d 2011 crack?
                8. -

                  To install Autocad Map 3d 2011 crack, you will need to disable your antivirus and firewall software. You will then need to extract the downloaded files with a file archiver. You will then need to run the setup file and follow the instructions. You will then need to copy the crack file and paste it into the installation folder. You will then need to launch Autocad Map 3d 2011 and enjoy.

                  -
                9. How to use Autocad Map 3d 2011 crack?
                10. -

                  To use Autocad Map 3d 2011 crack, you will need to learn the basic and advanced features and functions of the software. You will also need to learn some tips and tricks for using the software effectively. You will also need to learn some common errors and solutions for using the software crack. You will also need to be aware of the risks and precautions of using the software crack.

                  -

                \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Ayatul Kursi Arabic Pdf Downloadl Extra Quality.md b/spaces/raedeXanto/academic-chatgpt-beta/Ayatul Kursi Arabic Pdf Downloadl Extra Quality.md deleted file mode 100644 index eda0148bd17fac74bc11b9dd4f604098bc923014..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Ayatul Kursi Arabic Pdf Downloadl Extra Quality.md +++ /dev/null @@ -1,39 +0,0 @@ -
                -

                Ayatul Kursi: The Verse of the Throne in Arabic and English

                -

                Ayatul Kursi is one of the most powerful and well-known verses of the Quran. It is the 255th verse of Surah Al-Baqarah, the second and longest chapter of the Quran. Ayatul Kursi means "The Verse of the Throne" because it describes some of the attributes and actions of Allah, who is the Lord of the Throne.

                -

                -

                In this article, we will provide you with a transliteration, translation, and explanation of Ayatul Kursi in Arabic and English. We will also share with you some of the benefits and virtues of reciting this verse regularly. You can also download a pdf file of Ayatul Kursi in Arabic text from the links below.

                -

                Ayatul Kursi Transliteration

                -

                Here is how to pronounce Ayatul Kursi in Arabic using the Roman alphabet:

                -
Allahu la ilaha illa huwa Al-Hayyul-Qayyum
La ta'khudhuhu sinatun wa la nawm
Lahu ma fi as-samawati wa ma fi al-ard
Man dhal-ladhi yashfa'u 'indahu illa bi-idhnihi
Ya'lamu ma bayna aydihim wa ma khalfahum
Wa la yuhituna bi shay'in min 'ilmihi illa bima sha'a
Wasi'a kursiyyuhu as-samawati wa al-ard
Wa la ya'uduhu hifdhuhuma wa huwa al-'Aliyyu al-'Adheem
                -
                -

                Ayatul Kursi Translation

                -

                Here is the meaning of Ayatul Kursi in English according to a popular translation:

                -

                -
Allah! There is no god ˹worthy of worship˺ except Him, the Ever-Living, All-Sustaining.
Neither drowsiness nor sleep overtakes Him.
To Him belongs whatever is in the heavens and whatever is on the earth.
Who could possibly intercede with Him without His permission?
He ˹fully˺ knows what is ahead of them and what is behind them, but no one can grasp any of His knowledge—except what He wills ˹to reveal˺.
His Seat encompasses the heavens and the earth, and the preservation of both does not tire Him.
For He is the Most High, the Greatest.
                -
                -

                Ayatul Kursi Explanation

                -

                Ayatul Kursi is a comprehensive statement of Allah's oneness, power, knowledge, and sovereignty. It affirms that Allah is the only true God who deserves worship, and that He is unlike any of His creation. He is eternal, self-sufficient, and free from any weakness or need. He has absolute control over everything that exists, and nothing can happen without His will or permission. He knows everything that was, is, and will be, and nothing can escape His awareness or comprehension. He is above His throne, which extends over the heavens and the earth, and He maintains them with ease. He is exalted in might and majesty, and none can compare to Him.

                -

                Ayatul Kursi Benefits

                -

                Ayatul Kursi has many benefits and virtues for those who recite it regularly. Some of them are:

                -
                  -
                • It protects from evil and harm. The Prophet Muhammad (peace be upon him) said: "Whoever recites Ayatul Kursi after every obligatory prayer, nothing can prevent him from entering Paradise except death." (Sunan an-Nasa'i)
                • -
                • It grants peace and tranquility. The Prophet Muhammad (peace be upon him) said: "When you lie down in your bed, recite Ayatul Kursi until you finish it. Then Allah will send a guardian for you who will stay with you until you wake up." (Al-Bukhari)
                • -
                • It increases one's faith and knowledge. The Prophet Muhammad (peace be upon him) said: "The one who recites Ayatul Kursi after every prayer will have his faith increased." (

                  \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Bukufarmakopeindonesiaedisi3 What You Need to Know About the Latest Standards for Drugs in Indonesia.md b/spaces/raedeXanto/academic-chatgpt-beta/Bukufarmakopeindonesiaedisi3 What You Need to Know About the Latest Standards for Drugs in Indonesia.md deleted file mode 100644 index dd60a13aab517868cb388cd0a5998b7a23ff2aef..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Bukufarmakopeindonesiaedisi3 What You Need to Know About the Latest Standards for Drugs in Indonesia.md +++ /dev/null @@ -1,120 +0,0 @@ - -

                  Buku Farmakope Indonesia Edisi 3: A Comprehensive Guide for Pharmacists

                  -

                  If you are a pharmacist or a student of pharmacy in Indonesia, you might have heard of Buku Farmakope Indonesia Edisi 3. But what is it exactly and why is it important? In this article, we will explain everything you need to know about this book, including its definition, contents, usage, benefits, and how to get it. Read on to find out more.

                  -

                  -

                  What is Buku Farmakope Indonesia Edisi 3?

                  -

                  Buku Farmakope Indonesia Edisi 3 is the third edition of the Indonesian Pharmacopoeia, which is the official book of standards for drugs and pharmaceutical preparations in Indonesia. It was published by the National Agency of Drug and Food Control (BPOM) in 1979 and revised in 1984.

                  -

                  Definition and purpose of Buku Farmakope Indonesia Edisi 3

                  -

                  A pharmacopoeia is a book that contains information on the quality, purity, identity, strength, composition, and testing methods of drugs and pharmaceutical preparations. It also provides guidelines on how to prepare, store, dispense, and use them safely and effectively. A pharmacopoeia serves as a reference for pharmacists, physicians, researchers, manufacturers, regulators, and consumers of drugs and pharmaceutical products.

                  -

                  Buku Farmakope Indonesia Edisi 3 is the official pharmacopoeia of Indonesia, which means that it sets the standards for all drugs and pharmaceutical preparations that are produced, imported, distributed, sold, or used in Indonesia. It aims to ensure the quality, safety, efficacy, and uniformity of drugs and pharmaceutical products in the country. It also reflects the current scientific knowledge and technological advances in the field of pharmacy.

                  -

                  History and development of Buku Farmakope Indonesia Edisi 3

                  -

                  The first edition of the Indonesian Pharmacopoeia was published in 1950 by the Ministry of Health. It was based on the Dutch Pharmacopoeia (Pharmacopoea Neerlandica) and the British Pharmacopoeia (BP). It contained about 500 monographs on drugs and pharmaceutical preparations.

                  -

                  The second edition was published in 1967 by the Directorate General of Pharmacy under the Ministry of Health. It was based on the BP and the United States Pharmacopoeia (USP). It contained about 700 monographs on drugs and pharmaceutical preparations.

                  -

                  The third edition was published in 1979 by the BPOM under the Ministry of Health. It was based on the BP, USP, International Pharmacopoeia (Ph.Int.), European Pharmacopoeia (Ph.Eur.), Japanese Pharmacopoeia (JP), Indian Pharmacopoeia (IP), Chinese Pharmacopoeia (CP), and other sources. It contained about 900 monographs on drugs and pharmaceutical preparations.

                  -

                  What are the contents of Buku Farmakope Indonesia Edisi 3?

                  -

                  Buku Farmakope Indonesia Edisi 3 consists of three main parts: general chapters, monographs, and appendices.

                  -

                  General chapters

                  -

                  The general chapters provide general information and guidance on various aspects of drugs and pharmaceutical preparations, such as definitions, terminology, nomenclature, classification, labeling, packaging, storage, stability, preservation, sterilization, disinfection, biological tests, chemical tests, physical tests, microbiological tests, validation, calibration, quality control, quality assurance, good manufacturing practices (GMP), good laboratory practices (GLP), good clinical practices (GCP), and pharmacovigilance.

                  -

                  Monographs

                  -

                  The monographs provide specific information and standards for individual drugs and pharmaceutical preparations, such as chemical name, molecular formula, molecular weight, structural formula, synonyms, description, identification tests, assay methods, impurity limits, content limits, potency limits, dose limits, dosage forms, routes of administration, indications, contraindications, precautions, warnings, adverse effects, interactions, overdose management, and references.

                  -

                  -

                  Appendices

                  -

                  The appendices provide supplementary information and data on various topics related to drugs and pharmaceutical preparations, such as abbreviations, symbols, units, conversion factors, atomic weights, molecular weights, melting points, boiling points, solubility, density, viscosity, refractive index, optical rotation, specific gravity, pH, buffer solutions, reagents, indicators, colorimetric standards, spectrophotometric standards, chromatographic standards, titration standards, calibration standards, reference substances, reference spectra, reference chromatograms, and tables.

                  -

                  How to use Buku Farmakope Indonesia Edisi 3?

                  -

                  Buku Farmakope Indonesia Edisi 3 is intended to be used as a reference book for pharmacists and other professionals involved in the production, distribution, sale, or use of drugs and pharmaceutical preparations in Indonesia. It is also useful for students and researchers of pharmacy and related fields. Here are some general principles and guidelines on how to use Buku Farmakope Indonesia Edisi 3:

                  -

                  General principles and guidelines

                  -
                    -
                  • Read the general chapters carefully before using the monographs or appendices. They provide essential information and guidance on various aspects of drugs and pharmaceutical preparations.
                  • Follow the standards and specifications given in the monographs, which set the requirements for individual drugs and pharmaceutical preparations.
                  • Refer to the appendices for supplementary information and data on topics related to drugs and pharmaceutical preparations.
                  • Use the latest edition of Buku Farmakope Indonesia as your primary reference, since it reflects the current scientific knowledge and technological advances in the field of pharmacy.
                  • Consult other sources of information if necessary. Buku Farmakope Indonesia is not intended to cover every aspect of drugs and pharmaceutical preparations, so you may need to consult textbooks, journals, guidelines, regulations, or experts if you encounter any problems or questions.
                  -

                  Examples of using Buku Farmakope Indonesia Edisi 3

                  -

                  To illustrate how to use Buku Farmakope Indonesia Edisi 3, here are some examples of common scenarios that pharmacists may encounter:

                  Scenario: You want to prepare a solution of sodium chloride injection according to the Indonesian Essential Drugs List (DOEN).

                  How to use Buku Farmakope Indonesia Edisi 3: You can find the monograph for sodium chloride injection in Buku Farmakope Indonesia Edisi 3 under "Sodium Chloride Injection". You can follow the specifications given in the monograph for preparing, testing, labeling, storing, and dispensing the solution. You can also refer to the general chapters for guidance.

                  Online sources

                  If you prefer to get Buku Farmakope Indonesia Edisi 3 online, you can download it as a PDF file from various websites. However, you should be careful about the authenticity and legality of these websites. You should also check the quality and format of the PDF file before downloading it.

                  Offline sources

                  -

                  If you prefer to get Buku Farmakope Indonesia Edisi 3 offline, you can buy it from various places, such as:

                  -
                    -
                  • The official bookstore of BPOM (https://www.pom.go.id/bookstore/). You can buy Buku Farmakope Indonesia Edisi 3 as a hard copy book from this bookstore. You can also find other books related to BPOM's activities and services.
                  • The official bookstore of IAI (https://www.iai.or.id/bookstore/). You can buy Buku Farmakope Indonesia Edisi 3 as a hard copy book from this bookstore. You can also find other books related to IAI's activities and services.
                  • The official bookstore of GPFI (https://www.gpfi.or.id/bookstore/). You can buy Buku Farmakope Indonesia Edisi 3 as a hard copy book from this bookstore. You can also find other books related to GPFI's activities and services.
                  • The official bookstore of ISMAFARSI (https://www.ismafarsi.or.id/bookstore/). You can buy Buku Farmakope Indonesia Edisi 3 as a hard copy book from this bookstore. You can also find other books related to ISMAFARSI's activities and services.
                  • Other bookstores that sell Buku Farmakope Indonesia Edisi 3 as a hard copy book. Some examples are Tokopedia (https://www.tokopedia.com/), Shopee (https://shopee.co.id/), Bukalapak (https://www.bukalapak.com/), etc. However, you should be careful about the price and availability of these bookstores. You should also check the condition and edition of the book before buying it.
                  -

                  Conclusion

                  -

                  Buku Farmakope Indonesia Edisi 3 is a comprehensive guide for pharmacists and the pharmaceutical industry in Indonesia. It provides reliable and updated standards and specifications for drugs and pharmaceutical preparations, along with relevant information and guidance on various aspects of their production and use. It benefits pharmacists and the pharmaceutical industry, as well as patients and public health in Indonesia, and it can be accessed online or offline through various sources.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about Buku Farmakope Indonesia Edisi 3:

                  -
                    -
                  1. What is the difference between Buku Farmakope Indonesia Edisi 3 and other editions?

                     Buku Farmakope Indonesia Edisi 3 is the third edition of the Indonesian Pharmacopoeia, published in 1979 and revised in 1984. It is based on sources such as BP, USP, Ph.Int., Ph.Eur., JP, IP, and CP, and it contains about 900 monographs on drugs and pharmaceutical preparations. Other editions are older or newer than Buku Farmakope Indonesia Edisi 3 and may differ in their sources, contents, or formats.

                  2. How often is Buku Farmakope Indonesia Edisi 3 updated?

                     Buku Farmakope Indonesia Edisi 3 is no longer updated, since it is an old edition of the Indonesian Pharmacopoeia. The latest edition is Buku Farmakope Indonesia Edisi 6, which was published in 2020; the Indonesian Pharmacopoeia is updated every five years by BPOM.

                  3. Is Buku Farmakope Indonesia Edisi 3 still valid and relevant?

                     Buku Farmakope Indonesia Edisi 3 is still valid and relevant for some drugs and pharmaceutical preparations that are produced, imported, distributed, sold, or used in Indonesia. However, it may not reflect the current scientific knowledge and technological advances in the field of pharmacy, so it is recommended to use the latest edition, Buku Farmakope Indonesia Edisi 6, as your primary reference.

                  4. Where can I get more information about Buku Farmakope Indonesia Edisi 3?

                     You can get more information from sources such as BPOM, IAI, GPFI, ISMAFARSI, or other websites that provide Buku Farmakope Indonesia Edisi 3 as a PDF file or a hard copy book. You can also contact us if you have any questions or feedback about this article.

                  5. How can I use Buku Farmakope Indonesia Edisi 3 for my academic or professional purposes?

                     Follow the general principles and guidelines on how to use it, and refer to the examples of common scenarios that pharmacists may encounter. You can also cite Buku Farmakope Indonesia Edisi 3 as a source of information in your research or reports.
                  -

                  0a6ba089eb
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Dj Mixer Free Download Crack [BETTER].md b/spaces/raedeXanto/academic-chatgpt-beta/Dj Mixer Free Download Crack [BETTER].md deleted file mode 100644 index 999fa8faf116678c79dfa967988378eb91949d53..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Dj Mixer Free Download Crack [BETTER].md +++ /dev/null @@ -1,34 +0,0 @@ - -

                  DJ Mixer Free Download Crack: How to Get the Best DJ Software for Free

                  -

                  If you are looking for DJ software that can help you mix music and songs professionally, you might be tempted to download a cracked version of a popular program like VirtualDJ or DJ Music Mixer. However, this is not a good idea for several reasons. Here are some of the risks and disadvantages of using a DJ mixer free download crack:

                  -

                  Dj Mixer Free Download Crack


                  Download Filehttps://tinourl.com/2uL13L



                  -
                    -
                  • You might get infected with malware or viruses that can harm your computer or steal your personal information.
                  • You might face legal issues or fines for violating the copyright laws and terms of service of the software developers.
                  • You might miss out on the latest updates, features, and support from the official software providers.
                  • You might experience bugs, errors, or crashes that can ruin your performance or damage your equipment.
                  • You might compromise the quality and integrity of your music and reputation as a DJ.
                  -

                  Instead of using a DJ mixer free download crack, you should consider some of the legitimate ways to get the best DJ software for free or at a low cost. Here are some of the options you can try:

                  -
                    -
                  • Download the free version of VirtualDJ from their official website[^1^]. This version is fully functional and compatible with most controllers and mixers on the market. It also supports video and audio mixing, real-time audio separation, and many other cutting-edge features. You can use it for home use or non-commercial purposes without any limitations.
                  • Download the free trial version of DJ Music Mixer Pro from their official website[^2^]. This version allows you to test all the features of the software for 10 days. You can mix music and songs with powerful equalizers, extract and convert audio from videos, record your live mixes, and more.
                  • Look for discounts or promotions from the official software providers or their partners. You might be able to get a lower price or a free license if you meet certain criteria or conditions. For example, VirtualDJ offers free licenses to students, schools, non-profits, radio stations, and some hardware manufacturers[^3^].
                  • Use alternative free or open-source DJ software that meets your needs and preferences. There are many other programs that you can download and use legally without paying anything. Some examples are Mixxx, Cross DJ Free, UltraMixer Free Edition, and Zulu DJ Software.
                  -

                  As you can see, there are many ways to get the best DJ software for free without resorting to a DJ mixer free download crack. By choosing a legal and safe option, you can enjoy mixing music and songs without any worries or hassles. You can also support the developers who work hard to create and improve these amazing tools for DJs.

                  How to Use VirtualDJ: A Beginner's Guide

                  -

                  VirtualDJ is one of the most popular and versatile DJ programs on the market. It allows you to mix music and songs from your computer, stream online tracks, apply effects, record your mixes, and more. Whether you want to practice at home, perform at parties, or broadcast online, VirtualDJ can help you achieve your DJing goals. In this article, we will show you how to use VirtualDJ as a beginner and give you some tips and tricks to improve your skills.

                  -

                  Step 1: Install and Launch VirtualDJ

                  -

                  To use VirtualDJ, you first need to download and install the program. You can get it for free from their official website[^1^], or you can buy a license to unlock more features and support. The installation process is simple and straightforward, just follow the on-screen instructions. Once the program is installed, you can launch it by double-clicking on the icon on your desktop or in your applications folder.

                  -

                  Step 2: Choose a Skin and Layout

                  -

                  When you open VirtualDJ for the first time, you will be asked to choose a skin. A skin is the appearance of the program, and it can affect how you interact with it. There are different skins available for different types of DJs, such as scratchers, controllers, turntablists, etc. You can also customize the skin by changing the colors, fonts, elements, etc. To choose a skin, click on the "Config" button at the top right corner of the screen, then go to "Interface" and select a skin from the list. You can also download more skins from their website or create your own.

                  -

                  After choosing a skin, you can also choose a layout. A layout is the arrangement of the elements on the screen, such as decks, mixer, browser, effects, etc. You can choose a layout that suits your style and preferences by clicking on the "Layout" button at the top left corner of the screen. You can also resize and move the elements by dragging them with your mouse.

                  -

                  -

                  Step 3: Import Your Music Library

                  -

                  To start mixing music with VirtualDJ, you need to import your music library into the program. You can do this by dragging and dropping your folders or files into the browser section at the bottom of the screen. You can also use the "Browse" feature to locate and add your music from your computer or external devices. VirtualDJ supports various audio formats, such as MP3, WAV, FLAC, OGG, etc.

                  -

                  Once you import your music library, VirtualDJ will analyze your tracks and display information such as BPM (beats per minute), key (musical scale), waveform (visual representation of sound), etc. This information will help you choose and match tracks that sound good together. You can also edit this information by right-clicking on a track and selecting "Tag Editor".
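                  VirtualDJ runs this analysis for you inside the program, but if you are curious how a BPM estimate can be computed in general, here is a minimal sketch using the open-source librosa Python library. This is not part of VirtualDJ, and the file name below is only a placeholder.

                  ```python
                  # Illustrative only: estimate the tempo (BPM) of a track with librosa.
                  # VirtualDJ's built-in analyzer is separate and more sophisticated.
                  import librosa

                  def estimate_bpm(path):
                      y, sr = librosa.load(path)                            # load audio as a mono waveform
                      tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)   # detect beats and estimate global tempo
                      return tempo

                  print(estimate_bpm("my_track.mp3"))                       # "my_track.mp3" is a placeholder path
                  ```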

                  -

                  Step 4: Load and Play Tracks

                  -

                  To load a track into a deck (the virtual turntable), you can either drag and drop it from the browser section or use the "Load" button on each deck. You can also use keyboard shortcuts or MIDI controllers to load tracks faster. To play a track, you can either click on the "Play" button on each deck or use keyboard shortcuts or MIDI controllers. You can also use other buttons such as "Cue", "Pause", "Stop", etc., to control the playback of each track.

                  -

                  To adjust the volume of each track, you can use the faders (the vertical sliders) on the mixer section in the middle of the screen. You can also use other knobs and buttons such as "Gain", "EQ", "Filter", etc., to modify the sound of each track. To blend two tracks together smoothly, you can use the crossfader (the horizontal slider) on the mixer section. You can also use other features such as "Sync", "Pitch", "Keylock", etc., to match the tempo and key of each track.

                  81aa517590
                  -
                  -
                  \ No newline at end of file diff --git "a/spaces/rainy3/chatgpt_academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/rainy3/chatgpt_academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" deleted file mode 100644 index 7c6a7ffb5cb2c42e6543c75d6ad9dd643f412cd9..0000000000000000000000000000000000000000 --- "a/spaces/rainy3/chatgpt_academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" +++ /dev/null @@ -1,29 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - for i in range(5): - currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month - currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day - i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=i_say, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=[], - sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? < PUT_YOUR_QUERY_HERE >)。" - ) - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 diff --git a/spaces/ramki123/testing/app.py b/spaces/ramki123/testing/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/ramki123/testing/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
diff --git a/spaces/rbarman/Audio_Separation_Spleeter/app.py b/spaces/rbarman/Audio_Separation_Spleeter/app.py deleted file mode 100644 index 9fdcaa85f6862867e196b81673d20ad5831b2bdf..0000000000000000000000000000000000000000 --- a/spaces/rbarman/Audio_Separation_Spleeter/app.py +++ /dev/null @@ -1,131 +0,0 @@ -import streamlit as st -import os -import tempfile -import subprocess - -# Set Streamlit app title -st.title("Audio Separation App") - -# Function to process the audio file -def separate_audio(audio_path): - - print(f"{audio_path=}") - head, tail = os.path.split(audio_path) - - gradio_temp_path = head - audio_filename = tail.split('.')[0] - print(f"{gradio_temp_path=}") - print(f"{audio_filename=}") - - command = f"spleeter separate -p spleeter:2stems {audio_path}" - command = command.split() - print(f"{command=}") - - result = subprocess.run(command) - print(result) - - print("--------") - accompaniment_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/accompaniment.wav" - vocals_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/vocals.wav" - print(f"{accompaniment_path=}") - print(os.path.exists(accompaniment_path)) - print(f"{vocals_path=}") - print(os.path.exists(vocals_path)) - - return vocals_path, accompaniment_path - - -def separate_audio_by_stem(audio_path, stem_count): - - print(f"{audio_path=}") - head, tail = os.path.split(audio_path) - - gradio_temp_path = head - audio_filename = tail.split('.')[0] - print(f"{gradio_temp_path=}") - print(f"{audio_filename=}") - print(f"{stem_count=}") - - command = f"spleeter separate -p spleeter:{stem_count}stems {audio_path}" - command = command.split() - print(f"{command=}") - - result = subprocess.run(command) - print(result) - - if stem_count == 2: - accompaniment_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/accompaniment.wav" - vocals_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/vocals.wav" - - print(f"{accompaniment_path=} \t exists: {os.path.exists(accompaniment_path)}") - print(f"{vocals_path=} \t exists: {os.path.exists(vocals_path)}") - - return [ - {'description': 'Accompaniment', 'path':accompaniment_path}, - {'description': 'Vocals', 'path':vocals_path}, - ] - - elif stem_count == 4: - - vocals_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/vocals.wav" - drums_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/drums.wav" - bass_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/bass.wav" - other_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/other.wav" - - print(f"{vocals_path=} \t exists: {os.path.exists(vocals_path)}") - print(f"{drums_path=} \t exists: {os.path.exists(drums_path)}") - print(f"{bass_path=} \t exists: {os.path.exists(bass_path)}") - print(f"{other_path=} \t exists: {os.path.exists(other_path)}") - - return [ - {'description': 'Vocals', 'path':vocals_path}, - {'description': 'Drums', 'path':drums_path}, - {'description': 'Bass', 'path':bass_path}, - {'description': 'Other', 'path':other_path}, - ] - - elif stem_count == 5: - - piano_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/piano.wav" - vocals_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/vocals.wav" - drums_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/drums.wav" - bass_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/bass.wav" - other_path = f"{gradio_temp_path}/separated_audio/{audio_filename}/other.wav" - - print(f"{piano_path=} \t exists: {os.path.exists(vocals_path)}") - 
print(f"{vocals_path=} \t exists: {os.path.exists(vocals_path)}") - print(f"{drums_path=} \t exists: {os.path.exists(drums_path)}") - print(f"{bass_path=} \t exists: {os.path.exists(bass_path)}") - print(f"{other_path=} \t exists: {os.path.exists(other_path)}") - - return [ - {'description': 'Vocals', 'path':vocals_path}, - {'description': 'Piano', 'path':piano_path}, - {'description': 'Drums', 'path':drums_path}, - {'description': 'Bass', 'path':bass_path}, - {'description': 'Other', 'path':other_path}, - ] - -# Streamlit app content -st.write("Upload an audio file") - -uploaded_file = st.file_uploader("Choose a file", type=["wav","mp3"]) -selected_stem_count = st.radio("Select stem count", (2,4,5)) - -if uploaded_file is not None: - - if st.button("Submit"): - - # Save the uploaded file to a temporary location - with tempfile.NamedTemporaryFile(delete=False) as temp_file: - temp_file.write(uploaded_file.read()) - temp_file_path = temp_file.name - - # Process the uploaded audio file - separate_audios = separate_audio_by_stem(temp_file_path, selected_stem_count) - - # Display the output files for download - st.write("Output Files:") - for audio in separate_audios: - st.write(audio['description']) - st.audio(audio['path'], format="audio/wav", start_time=0) diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (gitanjali By Rabindranath Tagore Pdf).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (gitanjali By Rabindranath Tagore Pdf).md deleted file mode 100644 index 299852747d6b17584b142cc57fbb411148ac7b82..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (gitanjali By Rabindranath Tagore Pdf).md +++ /dev/null @@ -1,6 +0,0 @@ -

                  HD Online Player (gitanjali by rabindranath tagore pdf)


                  DOWNLOAD https://urlgoal.com/2uCJP8



                  - -The writings of Nobel Prize winner Rabindranath Tagore come to life in this collection of tales set in early-20th-century Bengal. Creators: Anurag Basu. Watch all ... 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/gfl_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/gfl_head.py deleted file mode 100644 index 12eb89db8c9c9336955d7ef40d6636d122537908..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/gfl_head.py +++ /dev/null @@ -1,648 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, Scale -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, bbox_overlaps, build_assigner, - build_sampler, images_to_levels, multi_apply, - reduce_mean, unmap) -from mmdet.core.utils import filter_scores_and_topk -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -class Integral(nn.Module): - """A fixed layer for calculating integral result from distribution. - - This layer calculates the target location by :math: `sum{P(y_i) * y_i}`, - P(y_i) denotes the softmax vector that represents the discrete distribution - y_i denotes the discrete set, usually {0, 1, 2, ..., reg_max} - - Args: - reg_max (int): The maximal value of the discrete set. Default: 16. You - may want to reset it according to your new dataset or related - settings. - """ - - def __init__(self, reg_max=16): - super(Integral, self).__init__() - self.reg_max = reg_max - self.register_buffer('project', - torch.linspace(0, self.reg_max, self.reg_max + 1)) - - def forward(self, x): - """Forward feature from the regression head to get integral result of - bounding box location. - - Args: - x (Tensor): Features of the regression head, shape (N, 4*(n+1)), - n is self.reg_max. - - Returns: - x (Tensor): Integral result of box locations, i.e., distance - offsets from the box center in four directions, shape (N, 4). - """ - x = F.softmax(x.reshape(-1, self.reg_max + 1), dim=1) - x = F.linear(x, self.project.type_as(x)).reshape(-1, 4) - return x - - -@HEADS.register_module() -class GFLHead(AnchorHead): - """Generalized Focal Loss: Learning Qualified and Distributed Bounding - Boxes for Dense Object Detection. - - GFL head structure is similar with ATSS, however GFL uses - 1) joint representation for classification and localization quality, and - 2) flexible General distribution for bounding box locations, - which are supervised by - Quality Focal Loss (QFL) and Distribution Focal Loss (DFL), respectively - - https://arxiv.org/abs/2006.04388 - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 4. - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='GN', num_groups=32, requires_grad=True). - loss_qfl (dict): Config of Quality Focal Loss (QFL). - bbox_coder (dict): Config of bbox coder. Defaults - 'DistancePointBBoxCoder'. - reg_max (int): Max value of integral set :math: `{0, ..., reg_max}` - in QFL setting. Default: 16. - init_cfg (dict or list[dict], optional): Initialization config dict. 
- Example: - >>> self = GFLHead(11, 7) - >>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]] - >>> cls_quality_score, bbox_pred = self.forward(feats) - >>> assert len(cls_quality_score) == len(self.scales) - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25), - bbox_coder=dict(type='DistancePointBBoxCoder'), - reg_max=16, - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='gfl_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.reg_max = reg_max - super(GFLHead, self).__init__( - num_classes, - in_channels, - bbox_coder=bbox_coder, - init_cfg=init_cfg, - **kwargs) - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - - self.integral = Integral(self.reg_max) - self.loss_dfl = build_loss(loss_dfl) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - assert self.num_anchors == 1, 'anchor free version' - self.gfl_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.gfl_reg = nn.Conv2d( - self.feat_channels, 4 * (self.reg_max + 1), 3, padding=1) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.prior_generator.strides]) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification and quality (IoU) - joint scores for all scale levels, each is a 4D-tensor, - the channel number is num_classes. - bbox_preds (list[Tensor]): Box distribution logits for all - scale levels, each is a 4D-tensor, the channel number is - 4*(n+1), n is max value of integral set. - """ - return multi_apply(self.forward_single, feats, self.scales) - - def forward_single(self, x, scale): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls and quality joint scores for a single - scale level the channel number is num_classes. - bbox_pred (Tensor): Box distribution logits for a single scale - level, the channel number is 4*(n+1), n is max value of - integral set. 
- """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.gfl_cls(cls_feat) - bbox_pred = scale(self.gfl_reg(reg_feat)).float() - return cls_score, bbox_pred - - def anchor_center(self, anchors): - """Get anchor centers from anchors. - - Args: - anchors (Tensor): Anchor list with shape (N, 4), "xyxy" format. - - Returns: - Tensor: Anchor centers with shape (N, 2), "xy" format. - """ - anchors_cx = (anchors[..., 2] + anchors[..., 0]) / 2 - anchors_cy = (anchors[..., 3] + anchors[..., 1]) / 2 - return torch.stack([anchors_cx, anchors_cy], dim=-1) - - def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights, - bbox_targets, stride, num_total_samples): - """Compute loss of a single scale level. - - Args: - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - cls_score (Tensor): Cls and quality joint scores for each scale - level has shape (N, num_classes, H, W). - bbox_pred (Tensor): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor - weight shape (N, num_total_anchors, 4). - stride (tuple): Stride in this scale level. - num_total_samples (int): Number of positive samples that is - reduced over all GPUs. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert stride[0] == stride[1], 'h stride is not equal to w stride!' - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(-1, 4 * (self.reg_max + 1)) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - score = label_weights.new_zeros(labels.shape) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0] - - weight_targets = cls_score.detach().sigmoid() - weight_targets = weight_targets.max(dim=1)[0][pos_inds] - pos_bbox_pred_corners = self.integral(pos_bbox_pred) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchor_centers, pos_bbox_pred_corners) - pos_decode_bbox_targets = pos_bbox_targets / stride[0] - score[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), - pos_decode_bbox_targets, - is_aligned=True) - pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1) - target_corners = self.bbox_coder.encode(pos_anchor_centers, - pos_decode_bbox_targets, - self.reg_max).reshape(-1) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=weight_targets, - avg_factor=1.0) - - # dfl loss - loss_dfl = self.loss_dfl( - pred_corners, - target_corners, - weight=weight_targets[:, None].expand(-1, 4).reshape(-1), - avg_factor=4.0) - else: - loss_bbox = bbox_pred.sum() * 0 - loss_dfl = bbox_pred.sum() * 0 - weight_targets = bbox_pred.new_tensor(0) - - # cls (qfl) loss - loss_cls = 
self.loss_cls( - cls_score, (labels, score), - weight=label_weights, - avg_factor=num_total_samples) - - return loss_cls, loss_bbox, loss_dfl, weight_targets.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Cls and quality scores for each scale - level has shape (N, num_classes, H, W). - bbox_preds (list[Tensor]): Box distribution logits for each scale - level with shape (N, 4*(n+1), H, W), n is max value of integral - set. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.prior_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, losses_dfl,\ - avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - labels_list, - label_weights_list, - bbox_targets_list, - self.prior_generator.strides, - num_total_samples=num_total_samples) - - avg_factor = sum(avg_factor) - avg_factor = reduce_mean(avg_factor).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / avg_factor, losses_bbox)) - losses_dfl = list(map(lambda x: x / avg_factor, losses_dfl)) - return dict( - loss_cls=losses_cls, loss_bbox=losses_bbox, loss_dfl=losses_dfl) - - def _get_bboxes_single(self, - cls_score_list, - bbox_pred_list, - score_factor_list, - mlvl_priors, - img_meta, - cfg, - rescale=False, - with_nms=True, - **kwargs): - """Transform outputs of a single image into bbox predictions. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image. GFL head does not need this value. - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid, has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. 
- with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. If with_nms - is False and mlvl_score_factor is None, return mlvl_bboxes and - mlvl_scores, else return mlvl_bboxes, mlvl_scores and - mlvl_score_factor. Usually with_nms is False is used for aug - test. If with_nms is True, then return the following format - - - det_bboxes (Tensor): Predicted bboxes with shape \ - [num_bboxes, 5], where the first 4 columns are bounding \ - box positions (tl_x, tl_y, br_x, br_y) and the 5-th \ - column are scores between 0 and 1. - - det_labels (Tensor): Predicted labels of the corresponding \ - box with shape [num_bboxes]. - """ - cfg = self.test_cfg if cfg is None else cfg - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_labels = [] - for level_idx, (cls_score, bbox_pred, stride, priors) in enumerate( - zip(cls_score_list, bbox_pred_list, - self.prior_generator.strides, mlvl_priors)): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - assert stride[0] == stride[1] - - bbox_pred = bbox_pred.permute(1, 2, 0) - bbox_pred = self.integral(bbox_pred) * stride[0] - - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - results = filter_scores_and_topk( - scores, cfg.score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, _, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - bboxes = self.bbox_coder.decode( - self.anchor_center(priors), bbox_pred, max_shape=img_shape) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - return self._bbox_post_process( - mlvl_scores, - mlvl_labels, - mlvl_bboxes, - img_meta['scale_factor'], - cfg, - rescale=rescale, - with_nms=with_nms) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Get targets for GFL head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. 
- """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, num_total_pos, - num_total_neg) - - def _get_target_single(self, - flat_anchors, - valid_flags, - num_level_anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors, 4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors Tensor): Number of anchors of each scale level. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - anchors (Tensor): All anchors in the image with shape (N, 4). - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). - bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4). - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). - neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). 
- """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - assign_result = self.assigner.assign(anchors, num_level_anchors_inside, - gt_bboxes, gt_bboxes_ignore, - gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/spaces/rorallitri/biomedical-language-models/logs/David Grossman qualcuno con cui correre ebook 117 un romanzo di avventura e amore.md b/spaces/rorallitri/biomedical-language-models/logs/David Grossman qualcuno con cui correre ebook 117 un romanzo di avventura e amore.md deleted file mode 100644 index cf2e9bc245da175928bb801fec7765f4f5f1bcfd..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/David Grossman qualcuno con cui correre ebook 117 un romanzo di avventura e amore.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  david grossman qualcuno con cui correre ebook 117


                  Download Zip ✯✯✯ https://tinurll.com/2uzmZ4



                  - - aaccfb2cb3
                  -
                  -
                  -

                  diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Lightform .rar and join the community of video mappers.md b/spaces/rorallitri/biomedical-language-models/logs/Download Lightform .rar and join the community of video mappers.md deleted file mode 100644 index 516c3ba682b6e0b27482bc5a6de4068bbf04e708..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Lightform .rar and join the community of video mappers.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Download Lightform .rar


                  Download Zip ---> https://tinurll.com/2uznsu



                  -
                  - aaccfb2cb3
                  -
                  -
                  -

                  diff --git a/spaces/runa91/bite_gradio/src/graph_networks/graphcmr/get_downsampled_mesh_npz.py b/spaces/runa91/bite_gradio/src/graph_networks/graphcmr/get_downsampled_mesh_npz.py deleted file mode 100644 index 039ef10d99a7261587ef4c1c345dc436b186a90f..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/graph_networks/graphcmr/get_downsampled_mesh_npz.py +++ /dev/null @@ -1,84 +0,0 @@ - -# try to use aenv_conda3 (maybe also export PYOPENGL_PLATFORM=osmesa) -# python src/graph_networks/graphcmr/get_downsampled_mesh_npz.py - -# see https://github.com/nkolot/GraphCMR/issues/35 - - -from __future__ import print_function -# import mesh_sampling -from psbody.mesh import Mesh, MeshViewer, MeshViewers -import numpy as np -import json -import os -import copy -import argparse -import pickle -import time -import sys -import trimesh - - - -sys.path.append(os.path.join(os.path.dirname(__file__), "../../../../")) -from barc_for_bite.src.graph_networks.graphcmr.pytorch_coma_mesh_operations import generate_transform_matrices -from barc_for_bite.src.configs.SMAL_configs import SMAL_MODEL_CONFIG -from barc_for_bite.src.smal_pytorch.smal_model.smal_torch_new import SMAL -# smal_model_path = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/smal_data/new_dog_models/my_smpl_00791_nadine_Jr_4_dog.pkl' - - -SMAL_MODEL_TYPE = '39dogs_diffsize' # '39dogs_diffsize' # '39dogs_norm' # 'barc' -smal_model_path = SMAL_MODEL_CONFIG[SMAL_MODEL_TYPE]['smal_model_path'] - -# data_path_root = "/is/cluster/work/nrueegg/icon_pifu_related/ICON/lib/graph_networks/graphcmr/data/" -data_path_root = "/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/src/graph_networks/graphcmr/data/" - -smal_dog_model_name = os.path.basename(smal_model_path).split('.pkl')[0] # 'my_smpl_SMBLD_nbj_v3' -suffix = "_template" -template_obj_path = data_path_root + smal_dog_model_name + suffix + ".obj" - -print("Loading smal .. ") -print(SMAL_MODEL_TYPE) -print(smal_model_path) - -smal = SMAL(smal_model_type=SMAL_MODEL_TYPE, template_name='neutral') -smal_verts = smal.v_template.detach().cpu().numpy() # (3889, 3) -smal_faces = smal.f # (7774, 3) -smal_trimesh = trimesh.base.Trimesh(vertices=smal_verts, faces=smal_faces, process=False, maintain_order=True) -smal_trimesh.export(file_obj=template_obj_path) # file_type='obj') - - -print("Loading data .. ") -reference_mesh_file = template_obj_path # 'data/barc_neutral_vertices.obj' # 'data/smpl_neutral_vertices.obj' -reference_mesh = Mesh(filename=reference_mesh_file) - -# ds_factors = [4, 4] # ds_factors = [4,1] # Sampling factor of the mesh at each stage of sampling -ds_factors = [4, 4, 4, 4] -print("Generating Transform Matrices ..") - - -# Generates adjecency matrices A, downsampling matrices D, and upsamling matrices U by sampling -# the mesh 4 times. Each time the mesh is sampled by a factor of 4 - -# M,A,D,U = mesh_sampling.generate_transform_matrices(reference_mesh, ds_factors) -M,A,D,U = generate_transform_matrices(reference_mesh, ds_factors) - -# REMARK: there is a warning: -# lib/graph_networks/graphcmr/../../../lib/graph_networks/graphcmr/pytorch_coma_mesh_operations.py:237: FutureWarning: `rcond` parameter will -# change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions. -# To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`. 
- - -print(type(A)) -np.savez(data_path_root + 'mesh_downsampling_' + smal_dog_model_name + suffix + '.npz', A = A, D = D, U = U) -np.savez(data_path_root + 'meshes/' + 'mesh_downsampling_meshes' + smal_dog_model_name + suffix + '.npz', M = M) - -for ind_m, my_mesh in enumerate(M): - new_suffix = '_template_downsampled' + str(ind_m) - my_mesh_tri = trimesh.Trimesh(vertices=my_mesh.v, faces=my_mesh.f, process=False, maintain_order=True) - my_mesh_tri.export(data_path_root + 'meshes/' + 'mesh_downsampling_meshes' + smal_dog_model_name + new_suffix + '.obj') - - - - - diff --git a/spaces/ryansilk/quantycs/StreamLit/quantycs/4_About Us.py b/spaces/ryansilk/quantycs/StreamLit/quantycs/4_About Us.py deleted file mode 100644 index 4291133a1d1e48dacb68d59add7b97b3ea41411a..0000000000000000000000000000000000000000 --- a/spaces/ryansilk/quantycs/StreamLit/quantycs/4_About Us.py +++ /dev/null @@ -1,98 +0,0 @@ -import pandas as pd -import streamlit as st -from pandas.api.types import ( - is_categorical_dtype, - is_datetime64_any_dtype, - is_numeric_dtype, - is_object_dtype, -) - -st.title("Auto Filter Dataframes in Streamlit") - -st.write( - """This app accomodates the blog [here](https://blog.streamlit.io/auto-generate-a-dataframe-filtering-ui-in-streamlit-with-filter_dataframe/) - and walks you through one example of how the Streamlit - Data Science Team builds add-on functions to Streamlit. - """ -) - - -def filter_dataframe(df: pd.DataFrame) -> pd.DataFrame: - """ - Adds a UI on top of a dataframe to let viewers filter columns - Args: - df (pd.DataFrame): Original dataframe - Returns: - pd.DataFrame: Filtered dataframe - """ - modify = st.checkbox("Add filters") - - if not modify: - return df - - df = df.copy() - - # Try to convert datetimes into a standard format (datetime, no timezone) - for col in df.columns: - if is_object_dtype(df[col]): - try: - df[col] = pd.to_datetime(df[col]) - except Exception: - pass - - if is_datetime64_any_dtype(df[col]): - df[col] = df[col].dt.tz_localize(None) - - modification_container = st.container() - - with modification_container: - to_filter_columns = st.multiselect("Filter dataframe on", df.columns) - for column in to_filter_columns: - left, right = st.columns((1, 20)) - left.write("↳") - # Treat columns with < 10 unique values as categorical - if is_categorical_dtype(df[column]) or df[column].nunique() < 10: - user_cat_input = right.multiselect( - f"Values for {column}", - df[column].unique(), - default=list(df[column].unique()), - ) - df = df[df[column].isin(user_cat_input)] - elif is_numeric_dtype(df[column]): - _min = float(df[column].min()) - _max = float(df[column].max()) - step = (_max - _min) / 100 - user_num_input = right.slider( - f"Values for {column}", - _min, - _max, - (_min, _max), - step=step, - ) - df = df[df[column].between(*user_num_input)] - elif is_datetime64_any_dtype(df[column]): - user_date_input = right.date_input( - f"Values for {column}", - value=( - df[column].min(), - df[column].max(), - ), - ) - if len(user_date_input) == 2: - user_date_input = tuple(map(pd.to_datetime, user_date_input)) - start_date, end_date = user_date_input - df = df.loc[df[column].between(start_date, end_date)] - else: - user_text_input = right.text_input( - f"Substring or regex in {column}", - ) - if user_text_input: - df = df[df[column].str.contains(user_text_input)] - - return df - - -df = pd.read_csv( - "https://raw.githubusercontent.com/mcnakhaee/palmerpenguins/master/palmerpenguins/data/penguins.csv" -) -st.dataframe(filter_dataframe(df)) \ No newline 
at end of file diff --git a/spaces/safi842/FashionGen/models/stylegan/stylegan_tf/run_metrics.py b/spaces/safi842/FashionGen/models/stylegan/stylegan_tf/run_metrics.py deleted file mode 100644 index 5d1597bbd4e16a2535309ea74c3559cae2a5fa53..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/stylegan/stylegan_tf/run_metrics.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Main entry point for training StyleGAN and ProGAN networks.""" - -import dnnlib -from dnnlib import EasyDict -import dnnlib.tflib as tflib - -import config -from metrics import metric_base -from training import misc - -#---------------------------------------------------------------------------- - -def run_pickle(submit_config, metric_args, network_pkl, dataset_args, mirror_augment): - ctx = dnnlib.RunContext(submit_config) - tflib.init_tf() - print('Evaluating %s metric on network_pkl "%s"...' % (metric_args.name, network_pkl)) - metric = dnnlib.util.call_func_by_name(**metric_args) - print() - metric.run(network_pkl, dataset_args=dataset_args, mirror_augment=mirror_augment, num_gpus=submit_config.num_gpus) - print() - ctx.close() - -#---------------------------------------------------------------------------- - -def run_snapshot(submit_config, metric_args, run_id, snapshot): - ctx = dnnlib.RunContext(submit_config) - tflib.init_tf() - print('Evaluating %s metric on run_id %s, snapshot %s...' % (metric_args.name, run_id, snapshot)) - run_dir = misc.locate_run_dir(run_id) - network_pkl = misc.locate_network_pkl(run_dir, snapshot) - metric = dnnlib.util.call_func_by_name(**metric_args) - print() - metric.run(network_pkl, run_dir=run_dir, num_gpus=submit_config.num_gpus) - print() - ctx.close() - -#---------------------------------------------------------------------------- - -def run_all_snapshots(submit_config, metric_args, run_id): - ctx = dnnlib.RunContext(submit_config) - tflib.init_tf() - print('Evaluating %s metric on all snapshots of run_id %s...' % (metric_args.name, run_id)) - run_dir = misc.locate_run_dir(run_id) - network_pkls = misc.list_network_pkls(run_dir) - metric = dnnlib.util.call_func_by_name(**metric_args) - print() - for idx, network_pkl in enumerate(network_pkls): - ctx.update('', idx, len(network_pkls)) - metric.run(network_pkl, run_dir=run_dir, num_gpus=submit_config.num_gpus) - print() - ctx.close() - -#---------------------------------------------------------------------------- - -def main(): - submit_config = dnnlib.SubmitConfig() - - # Which metrics to evaluate? - metrics = [] - metrics += [metric_base.fid50k] - #metrics += [metric_base.ppl_zfull] - #metrics += [metric_base.ppl_wfull] - #metrics += [metric_base.ppl_zend] - #metrics += [metric_base.ppl_wend] - #metrics += [metric_base.ls] - #metrics += [metric_base.dummy] - - # Which networks to evaluate them on? 
- tasks = [] - tasks += [EasyDict(run_func_name='run_metrics.run_pickle', network_pkl='https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ', dataset_args=EasyDict(tfrecord_dir='ffhq', shuffle_mb=0), mirror_augment=True)] # karras2019stylegan-ffhq-1024x1024.pkl - #tasks += [EasyDict(run_func_name='run_metrics.run_snapshot', run_id=100, snapshot=25000)] - #tasks += [EasyDict(run_func_name='run_metrics.run_all_snapshots', run_id=100)] - - # How many GPUs to use? - submit_config.num_gpus = 1 - #submit_config.num_gpus = 2 - #submit_config.num_gpus = 4 - #submit_config.num_gpus = 8 - - # Execute. - submit_config.run_dir_root = dnnlib.submission.submit.get_template_from_path(config.result_dir) - submit_config.run_dir_ignore += config.run_dir_ignore - for task in tasks: - for metric in metrics: - submit_config.run_desc = '%s-%s' % (task.run_func_name, metric.name) - if task.run_func_name.endswith('run_snapshot'): - submit_config.run_desc += '-%s-%s' % (task.run_id, task.snapshot) - if task.run_func_name.endswith('run_all_snapshots'): - submit_config.run_desc += '-%s' % task.run_id - submit_config.run_desc += '-%dgpu' % submit_config.num_gpus - dnnlib.submit_run(submit_config, metric_args=metric, **task) - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - main() - -#---------------------------------------------------------------------------- diff --git a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/visualizer.py b/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/visualizer.py deleted file mode 100644 index 7a1b7b101e9b73f75f9136bc67f2063c7c1cf1c1..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/GroundingDINO/groundingdino/util/visualizer.py +++ /dev/null @@ -1,318 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@File : visualizer.py -@Time : 2022/04/05 11:39:33 -@Author : Shilong Liu -@Contact : slongliu86@gmail.com -""" - -import datetime -import os - -import cv2 -import matplotlib.pyplot as plt -import numpy as np -import torch -from matplotlib import transforms -from matplotlib.collections import PatchCollection -from matplotlib.patches import Polygon -from pycocotools import mask as maskUtils - - -def renorm( - img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] -) -> torch.FloatTensor: - # img: tensor(3,H,W) or tensor(B,3,H,W) - # return: same as img - assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim() - if img.dim() == 3: - assert img.size(0) == 3, 'img.size(0) shoule be 3 but "%d". (%s)' % ( - img.size(0), - str(img.size()), - ) - img_perm = img.permute(1, 2, 0) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(2, 0, 1) - else: # img.dim() == 4 - assert img.size(1) == 3, 'img.size(1) shoule be 3 but "%d". (%s)' % ( - img.size(1), - str(img.size()), - ) - img_perm = img.permute(0, 2, 3, 1) - mean = torch.Tensor(mean) - std = torch.Tensor(std) - img_res = img_perm * std + mean - return img_res.permute(0, 3, 1, 2) - - -class ColorMap: - def __init__(self, basergb=[255, 255, 0]): - self.basergb = np.array(basergb) - - def __call__(self, attnmap): - # attnmap: h, w. np.uint8. - # return: h, w, 4. np.uint8. 
- assert attnmap.dtype == np.uint8 - h, w = attnmap.shape - res = self.basergb.copy() - res = res[None][None].repeat(h, 0).repeat(w, 1) # h, w, 3 - attn1 = attnmap.copy()[..., None] # h, w, 1 - res = np.concatenate((res, attn1), axis=-1).astype(np.uint8) - return res - - -def rainbow_text(x, y, ls, lc, **kw): - """ - Take a list of strings ``ls`` and colors ``lc`` and place them next to each - other, with text ls[i] being shown in color lc[i]. - - This example shows how to do both vertical and horizontal text, and will - pass all keyword arguments to plt.text, so you can set the font size, - family, etc. - """ - t = plt.gca().transData - fig = plt.gcf() - plt.show() - - # horizontal version - for s, c in zip(ls, lc): - text = plt.text(x, y, " " + s + " ", color=c, transform=t, **kw) - text.draw(fig.canvas.get_renderer()) - ex = text.get_window_extent() - t = transforms.offset_copy(text._transform, x=ex.width, units="dots") - - # #vertical version - # for s,c in zip(ls,lc): - # text = plt.text(x,y," "+s+" ",color=c, transform=t, - # rotation=90,va='bottom',ha='center',**kw) - # text.draw(fig.canvas.get_renderer()) - # ex = text.get_window_extent() - # t = transforms.offset_copy(text._transform, y=ex.height, units='dots') - - -class COCOVisualizer: - def __init__(self, coco=None, tokenlizer=None) -> None: - self.coco = coco - - def visualize(self, img, tgt, caption=None, dpi=180, savedir="vis"): - """ - img: tensor(3, H, W) - tgt: make sure they are all on cpu. - must have items: 'image_id', 'boxes', 'size' - """ - plt.figure(dpi=dpi) - plt.rcParams["font.size"] = "5" - ax = plt.gca() - img = renorm(img).permute(1, 2, 0) - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - ax.imshow(img) - - self.addtgt(tgt) - - if tgt is None: - image_id = 0 - elif "image_id" not in tgt: - image_id = 0 - else: - image_id = tgt["image_id"] - - if caption is None: - savename = "{}/{}-{}.png".format( - savedir, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - else: - savename = "{}/{}-{}-{}.png".format( - savedir, caption, int(image_id), str(datetime.datetime.now()).replace(" ", "-") - ) - print("savename: {}".format(savename)) - os.makedirs(os.path.dirname(savename), exist_ok=True) - plt.savefig(savename) - plt.close() - - def addtgt(self, tgt): - """ """ - if tgt is None or not "boxes" in tgt: - ax = plt.gca() - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - - ax.set_axis_off() - return - - ax = plt.gca() - H, W = tgt["size"] - numbox = tgt["boxes"].shape[0] - - color = [] - polygons = [] - boxes = [] - for box in tgt["boxes"].cpu(): - unnormbbox = box * torch.Tensor([W, H, W, H]) - unnormbbox[:2] -= unnormbbox[2:] / 2 - [bbox_x, bbox_y, bbox_w, bbox_h] = unnormbbox.tolist() - boxes.append([bbox_x, bbox_y, bbox_w, bbox_h]) - poly = [ - [bbox_x, bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - color.append(c) - - p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.1) - ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - - if "strings_positive" in tgt and len(tgt["strings_positive"]) > 0: - assert ( - len(tgt["strings_positive"]) == numbox - ), f"{len(tgt['strings_positive'])} = {numbox}, " - for idx, strlist in 
enumerate(tgt["strings_positive"]): - cate_id = int(tgt["labels"][idx]) - _string = str(cate_id) + ":" + " ".join(strlist) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "box_label" in tgt: - assert len(tgt["box_label"]) == numbox, f"{len(tgt['box_label'])} = {numbox}, " - for idx, bl in enumerate(tgt["box_label"]): - _string = str(bl) - bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx] - # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1}) - ax.text( - bbox_x, - bbox_y, - _string, - color="black", - bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1}, - ) - - if "caption" in tgt: - ax.set_title(tgt["caption"], wrap=True) - # plt.figure() - # rainbow_text(0.0,0.0,"all unicorns poop rainbows ! ! !".split(), - # ['red', 'orange', 'brown', 'green', 'blue', 'purple', 'black']) - - if "attn" in tgt: - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - if isinstance(tgt["attn"], tuple): - tgt["attn"] = [tgt["attn"]] - for item in tgt["attn"]: - attn_map, basergb = item - attn_map = (attn_map - attn_map.min()) / (attn_map.max() - attn_map.min() + 1e-3) - attn_map = (attn_map * 255).astype(np.uint8) - cm = ColorMap(basergb) - heatmap = cm(attn_map) - ax.imshow(heatmap) - ax.set_axis_off() - - def showAnns(self, anns, draw_bbox=False): - """ - Display the specified annotations. - :param anns (array of object): annotations to display - :return: None - """ - if len(anns) == 0: - return 0 - if "segmentation" in anns[0] or "keypoints" in anns[0]: - datasetType = "instances" - elif "caption" in anns[0]: - datasetType = "captions" - else: - raise Exception("datasetType not supported") - if datasetType == "instances": - ax = plt.gca() - ax.set_autoscale_on(False) - polygons = [] - color = [] - for ann in anns: - c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0] - if "segmentation" in ann: - if type(ann["segmentation"]) == list: - # polygon - for seg in ann["segmentation"]: - poly = np.array(seg).reshape((int(len(seg) / 2), 2)) - polygons.append(Polygon(poly)) - color.append(c) - else: - # mask - t = self.imgs[ann["image_id"]] - if type(ann["segmentation"]["counts"]) == list: - rle = maskUtils.frPyObjects( - [ann["segmentation"]], t["height"], t["width"] - ) - else: - rle = [ann["segmentation"]] - m = maskUtils.decode(rle) - img = np.ones((m.shape[0], m.shape[1], 3)) - if ann["iscrowd"] == 1: - color_mask = np.array([2.0, 166.0, 101.0]) / 255 - if ann["iscrowd"] == 0: - color_mask = np.random.random((1, 3)).tolist()[0] - for i in range(3): - img[:, :, i] = color_mask[i] - ax.imshow(np.dstack((img, m * 0.5))) - if "keypoints" in ann and type(ann["keypoints"]) == list: - # turn skeleton into zero-based index - sks = np.array(self.loadCats(ann["category_id"])[0]["skeleton"]) - 1 - kp = np.array(ann["keypoints"]) - x = kp[0::3] - y = kp[1::3] - v = kp[2::3] - for sk in sks: - if np.all(v[sk] > 0): - plt.plot(x[sk], y[sk], linewidth=3, color=c) - plt.plot( - x[v > 0], - y[v > 0], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor="k", - markeredgewidth=2, - ) - plt.plot( - x[v > 1], - y[v > 1], - "o", - markersize=8, - markerfacecolor=c, - markeredgecolor=c, - markeredgewidth=2, - ) - - if draw_bbox: - [bbox_x, bbox_y, bbox_w, bbox_h] = ann["bbox"] - poly = [ - [bbox_x, 
bbox_y], - [bbox_x, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y + bbox_h], - [bbox_x + bbox_w, bbox_y], - ] - np_poly = np.array(poly).reshape((4, 2)) - polygons.append(Polygon(np_poly)) - color.append(c) - - # p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.4) - # ax.add_collection(p) - p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2) - ax.add_collection(p) - elif datasetType == "captions": - for ann in anns: - print(ann["caption"]) diff --git a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/transformer.py b/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/transformer.py deleted file mode 100644 index 040c2e393ccdac50d1e41f0327d6dec8c0010fbd..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/transformer.py +++ /dev/null @@ -1,91 +0,0 @@ -import math -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import TransformerEncoder, TransformerEncoderLayer -from torch.nn.modules.transformer import MultiheadAttention, _get_activation_fn - -from utils import SeqBN - - -class TransformerModel(nn.Module): - def __init__(self, encoder, n_out, ninp, nhead, nhid, nlayers, dropout=0.0, y_encoder=None, pos_encoder=None, decoder=None, input_normalization=False): - super().__init__() - self.model_type = 'Transformer' - encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout, activation='gelu') - self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers) - self.ninp = ninp - self.encoder = encoder - self.y_encoder = y_encoder - self.pos_encoder = pos_encoder - self.decoder = decoder(ninp, nhid, n_out) if decoder is not None else nn.Sequential(nn.Linear(ninp, nhid), nn.GELU(), nn.Linear(nhid, n_out)) - self.input_ln = SeqBN(ninp) if input_normalization else None - - self.init_weights() - - @staticmethod - def generate_square_subsequent_mask(sz): - mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1) - mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0)) - return mask - - @staticmethod - def generate_D_q_matrix(sz, query_size): - train_size = sz-query_size - mask = torch.zeros(sz,sz) == 0 - mask[:,train_size:].zero_() - mask |= torch.eye(sz) == 1 - mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0)) - return mask - - def init_weights(self): - initrange = 1. - # if isinstance(self.encoder,EmbeddingEncoder): - # self.encoder.weight.data.uniform_(-initrange, initrange) - # self.decoder.bias.data.zero_() - # self.decoder.weight.data.uniform_(-initrange, initrange) - for layer in self.transformer_encoder.layers: - nn.init.zeros_(layer.linear2.weight) - nn.init.zeros_(layer.linear2.bias) - nn.init.zeros_(layer.self_attn.out_proj.weight) - nn.init.zeros_(layer.self_attn.out_proj.bias) - - def forward(self, src, src_mask=None, single_eval_pos=None): - assert single_eval_pos is not None, 'Single eval pos is required now.' - fuse_x_y = not isinstance(src, tuple) - assert not(fuse_x_y and single_eval_pos is not None), \ - 'Don\'t use both fuxe_x_y and single_eval_pos (permutation equivariant setup) at the same time.' 
- if src_mask is None: - x_src = src if fuse_x_y else src[0] - if single_eval_pos is None: - src_mask = self.generate_square_subsequent_mask(len(x_src) if fuse_x_y else 2*len(x_src)).to(x_src.device) - else: - src_mask = self.generate_D_q_matrix(len(x_src), len(x_src)-single_eval_pos).to(x_src.device) - if not fuse_x_y: - x_src, y_src = src - x_src = self.encoder(x_src) - y_src = self.y_encoder(y_src.unsqueeze(-1)) - if single_eval_pos is None: - src = torch.stack([x_src, y_src], 1).view(-1, *x_src.shape[1:]) - else: - train_x = x_src[:single_eval_pos] + y_src[:single_eval_pos] - src = torch.cat([train_x, x_src[single_eval_pos:]], 0) - else: - src = self.encoder(src) - - if self.input_ln is not None: - src = self.input_ln(src) - - if self.pos_encoder is not None: - src = self.pos_encoder(src) - - output = self.transformer_encoder(src, src_mask) - output = self.decoder(output) - if fuse_x_y: - return output - elif single_eval_pos is None: - return output[0::2] - else: - return output[single_eval_pos:] diff --git a/spaces/scedlatioru/img-to-music/Handycafe-V-1116-TOP-Crackrar.md b/spaces/scedlatioru/img-to-music/Handycafe-V-1116-TOP-Crackrar.md deleted file mode 100644 index d9551c89215dde253ba1228769d21eba39ddb1db..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/Handycafe-V-1116-TOP-Crackrar.md +++ /dev/null @@ -1,45 +0,0 @@ -Handycafe V 1.1.16 Crack.rar - - - -Handycafe V 1.1.16 Crack.rar ===== [https://ekporriola.blogspot.com/?c=2tvDPU](https://ekporriola.blogspot.com/?c=2tvDPU) - - - - - - - - - -How to Download and Install Handycafe V 1.1.16 Crack.rar -Handycafe V 1.1.16 Crack.rar is a file that contains a cracked version of Handycafe, a software that allows you to manage your internet cafe business. Handycafe is a popular and reliable software that has many features, such as billing, monitoring, security, reporting, and more. However, it is not free and requires a license key to activate. Some users may try to download and install Handycafe V 1.1.16 Crack.rar to bypass the license verification and use Handycafe for free. -However, this is not recommended for several reasons. First of all, downloading and installing Handycafe V 1.1.16 Crack.rar may be illegal and violate the terms of service of Handycafe. You may face legal consequences if you are caught using a cracked version of Handycafe. Second, downloading and installing Handycafe V 1.1.16 Crack.rar may be unsafe and expose your computer to malware, viruses, or other threats. You may compromise your data and privacy if you download and install Handycafe V 1.1.16 Crack.rar from untrusted sources. Third, downloading and installing Handycafe V 1.1.16 Crack.rar may not work properly and cause errors or problems with your system or network. You may experience crashes, glitches, or compatibility issues if you use a cracked version of Handycafe. -Therefore, it is better to avoid downloading and installing Handycafe V 1.1.16 Crack.rar and instead purchase a legitimate license key from the official website of Handycafe[^1^]. This way, you can enjoy the full features and benefits of Handycafe without any risks or troubles. -If you still want to download and install Handycafe V 1.1.16 Crack.rar, here are the steps you need to follow: - -Download Handycafe V 1.1.16 Crack.rar from a source that claims to provide it, such as [^2^] [^3^] [^4^] [^5^]. Be careful and scan the file for any malware or viruses before opening it. -Extract the file using a program like WinRAR or 7-Zip. 
-Run the setup.exe file and follow the instructions to install Handycafe on your computer. -Copy the crack file from the extracted folder and paste it into the installation directory of Handycafe. -Run Handycafe and enjoy using it for free. - -Note: This article is for informational purposes only and does not endorse or encourage downloading and installing Handycafe V 1.1.16 Crack.rar or any other cracked software.Here are some additional paragraphs for the article: -How to Use Handycafe for Your Internet Cafe Business -Once you have downloaded and installed Handycafe on your computer, you can use it to manage your internet cafe business. Handycafe has many features that can help you with your daily operations, such as: - -Billing: You can set up different pricing schemes for your customers, such as time-based, volume-based, or prepaid. You can also create discounts, coupons, or loyalty programs to attract more customers. You can also generate invoices and receipts for your transactions. -Monitoring: You can monitor the activities of your customers and employees on your network. You can see what websites they are visiting, what applications they are using, how much bandwidth they are consuming, and how long they are staying. You can also block or limit access to certain websites or applications that are inappropriate or harmful. -Security: You can protect your network and data from unauthorized access or attacks. You can encrypt your traffic, use firewalls, antivirus, or anti-spyware software, and backup your data regularly. You can also restrict access to certain settings or functions of Handycafe to prevent tampering or misuse. -Reporting: You can generate various reports and statistics for your business, such as sales, income, expenses, inventory, customer behavior, employee performance, and more. You can also export or print your reports for further analysis or presentation. - -How to Get Support and Updates for Handycafe -If you encounter any problems or issues with Handycafe, you can get support and updates from the official website of Handycafe. You can find the following resources on the website: - -FAQ: You can find answers to frequently asked questions about Handycafe, such as installation, activation, configuration, troubleshooting, and more. -Forum: You can join the online community of Handycafe users and share your experiences, tips, feedback, or suggestions. You can also ask questions and get answers from other users or moderators. -Contact: You can contact the customer service team of Handycafe via email or phone and get assistance for your specific issues or inquiries. -Download: You can download the latest version of Handycafe or any patches or updates that are available. You can also download additional tools or add-ons that can enhance the functionality of Handycafe. dfd1c89656 - - - diff --git a/spaces/scedlatioru/img-to-music/example/4yo 5yo 6yo 7yo 8yo 9yo Loli HOT.md b/spaces/scedlatioru/img-to-music/example/4yo 5yo 6yo 7yo 8yo 9yo Loli HOT.md deleted file mode 100644 index a34aa43c1ea56903e0c3fd0acb73be507e8f65c1..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/4yo 5yo 6yo 7yo 8yo 9yo Loli HOT.md +++ /dev/null @@ -1,30 +0,0 @@ -


                  diff --git a/spaces/scedlatioru/img-to-music/example/Cubase 5 _HOT_ Download.md b/spaces/scedlatioru/img-to-music/example/Cubase 5 _HOT_ Download.md deleted file mode 100644 index eefaff7699f7bbe4c6de1c5e73a24e06aef3a465..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Cubase 5 _HOT_ Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Cubase 5 download


                  DOWNLOAD ○○○ https://gohhs.com/2uEzet



                  -
                  -Download Cubase 5 New Version 2021 . Link -- ..."Cubase 5" is a full-featured recording, mixing, mastering, mixing and playback software that comes in 32-bit and 64-bit versions, each including a free recording package. This also includes free tools - "Plug-in Editor Pro" and "Lesser Monkey Ensemble Pro". 8a78ff9644
                  -
                  -
                  -

                  diff --git a/spaces/scedlatioru/img-to-music/example/DownloadmovieHarryPotterAndTheDeathlyHallowsPart2inhindihd(1).md b/spaces/scedlatioru/img-to-music/example/DownloadmovieHarryPotterAndTheDeathlyHallowsPart2inhindihd(1).md deleted file mode 100644 index 89b35b7966f004d059a1b785d7ebdfb8f627aa2b..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/DownloadmovieHarryPotterAndTheDeathlyHallowsPart2inhindihd(1).md +++ /dev/null @@ -1,6 +0,0 @@ -

                  downloadmovieHarryPotterAndTheDeathlyHallowsPart2inhindihd(1)


                  Download File >>> https://gohhs.com/2uEzOE



                  - -DownloadmovieHarryPotterAndTheDeathlyHallowsPart2inhindihd(1) ->->->-> DOWNLOAD 608fcfdb5b Complete Guide to High Dynamic Range Digital ... 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/scedlatioru/img-to-music/example/Virtual Girl Hd 2012 Full Crack.md b/spaces/scedlatioru/img-to-music/example/Virtual Girl Hd 2012 Full Crack.md deleted file mode 100644 index ea050fc3ecb1f542605e8730fa7529d6cbfae388..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Virtual Girl Hd 2012 Full Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  virtual girl hd 2012 full crack


                  Download File ===> https://gohhs.com/2uEzkt



                  - -Virtual Girl HD FULL (strips completos) by VGHD COMPLETA ... VGHD COMPLETA Uploaded 7 years ago 2012-11-15. baixe o vghd ... 0:00. 1. virtual girl & dance music editing fun ... 5. Virtuagirl full Cracked Free Credits Credits Software. 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/sczhou/ProPainter/core/metrics.py b/spaces/sczhou/ProPainter/core/metrics.py deleted file mode 100644 index d0dfb73f1d09a249f801770eada5e133c8148df2..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/core/metrics.py +++ /dev/null @@ -1,569 +0,0 @@ -import numpy as np -from skimage import measure -from scipy import linalg - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from core.utils import to_tensors - - -def calculate_epe(flow1, flow2): - """Calculate End point errors.""" - - epe = torch.sum((flow1 - flow2)**2, dim=1).sqrt() - epe = epe.view(-1) - return epe.mean().item() - - -def calculate_psnr(img1, img2): - """Calculate PSNR (Peak Signal-to-Noise Ratio). - Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio - Args: - img1 (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - Returns: - float: psnr result. - """ - - assert img1.shape == img2.shape, \ - (f'Image shapes are differnet: {img1.shape}, {img2.shape}.') - - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20. * np.log10(255. / np.sqrt(mse)) - - -def calc_psnr_and_ssim(img1, img2): - """Calculate PSNR and SSIM for images. - img1: ndarray, range [0, 255] - img2: ndarray, range [0, 255] - """ - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - - psnr = calculate_psnr(img1, img2) - ssim = measure.compare_ssim(img1, - img2, - data_range=255, - multichannel=True, - win_size=65) - - return psnr, ssim - - -########################### -# I3D models -########################### - - -def init_i3d_model(i3d_model_path): - print(f"[Loading I3D model from {i3d_model_path} for FID score ..]") - i3d_model = InceptionI3d(400, in_channels=3, final_endpoint='Logits') - i3d_model.load_state_dict(torch.load(i3d_model_path)) - i3d_model.to(torch.device('cuda:0')) - return i3d_model - - -def calculate_i3d_activations(video1, video2, i3d_model, device): - """Calculate VFID metric. - video1: list[PIL.Image] - video2: list[PIL.Image] - """ - video1 = to_tensors()(video1).unsqueeze(0).to(device) - video2 = to_tensors()(video2).unsqueeze(0).to(device) - video1_activations = get_i3d_activations( - video1, i3d_model).cpu().numpy().flatten() - video2_activations = get_i3d_activations( - video2, i3d_model).cpu().numpy().flatten() - - return video1_activations, video2_activations - - -def calculate_vfid(real_activations, fake_activations): - """ - Given two distribution of features, compute the FID score between them - Params: - real_activations: list[ndarray] - fake_activations: list[ndarray] - """ - m1 = np.mean(real_activations, axis=0) - m2 = np.mean(fake_activations, axis=0) - s1 = np.cov(real_activations, rowvar=False) - s2 = np.cov(fake_activations, rowvar=False) - return calculate_frechet_distance(m1, s1, m2, s2) - - -def calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6): - """Numpy implementation of the Frechet Distance. - The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1) - and X_2 ~ N(mu_2, C_2) is - d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)). - Stable version by Dougal J. Sutherland. - Params: - -- mu1 : Numpy array containing the activations of a layer of the - inception net (like returned by the function 'get_predictions') - for generated samples. - -- mu2 : The sample mean over activations, precalculated on an - representive data set. - -- sigma1: The covariance matrix over activations for generated samples. 
- -- sigma2: The covariance matrix over activations, precalculated on an - representive data set. - Returns: - -- : The Frechet Distance. - """ - - mu1 = np.atleast_1d(mu1) - mu2 = np.atleast_1d(mu2) - - sigma1 = np.atleast_2d(sigma1) - sigma2 = np.atleast_2d(sigma2) - - assert mu1.shape == mu2.shape, \ - 'Training and test mean vectors have different lengths' - assert sigma1.shape == sigma2.shape, \ - 'Training and test covariances have different dimensions' - - diff = mu1 - mu2 - - # Product might be almost singular - covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False) - if not np.isfinite(covmean).all(): - msg = ('fid calculation produces singular product; ' - 'adding %s to diagonal of cov estimates') % eps - print(msg) - offset = np.eye(sigma1.shape[0]) * eps - covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset)) - - # Numerical error might give slight imaginary component - if np.iscomplexobj(covmean): - if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3): - m = np.max(np.abs(covmean.imag)) - raise ValueError('Imaginary component {}'.format(m)) - covmean = covmean.real - - tr_covmean = np.trace(covmean) - - return (diff.dot(diff) + np.trace(sigma1) + # NOQA - np.trace(sigma2) - 2 * tr_covmean) - - -def get_i3d_activations(batched_video, - i3d_model, - target_endpoint='Logits', - flatten=True, - grad_enabled=False): - """ - Get features from i3d model and flatten them to 1d feature, - valid target endpoints are defined in InceptionI3d.VALID_ENDPOINTS - VALID_ENDPOINTS = ( - 'Conv3d_1a_7x7', - 'MaxPool3d_2a_3x3', - 'Conv3d_2b_1x1', - 'Conv3d_2c_3x3', - 'MaxPool3d_3a_3x3', - 'Mixed_3b', - 'Mixed_3c', - 'MaxPool3d_4a_3x3', - 'Mixed_4b', - 'Mixed_4c', - 'Mixed_4d', - 'Mixed_4e', - 'Mixed_4f', - 'MaxPool3d_5a_2x2', - 'Mixed_5b', - 'Mixed_5c', - 'Logits', - 'Predictions', - ) - """ - with torch.set_grad_enabled(grad_enabled): - feat = i3d_model.extract_features(batched_video.transpose(1, 2), - target_endpoint) - if flatten: - feat = feat.view(feat.size(0), -1) - - return feat - - -# This code is from https://github.com/piergiaj/pytorch-i3d/blob/master/pytorch_i3d.py -# I only fix flake8 errors and do some cleaning here - - -class MaxPool3dSamePadding(nn.MaxPool3d): - def compute_pad(self, dim, s): - if s % self.stride[dim] == 0: - return max(self.kernel_size[dim] - self.stride[dim], 0) - else: - return max(self.kernel_size[dim] - (s % self.stride[dim]), 0) - - def forward(self, x): - # compute 'same' padding - (batch, channel, t, h, w) = x.size() - pad_t = self.compute_pad(0, t) - pad_h = self.compute_pad(1, h) - pad_w = self.compute_pad(2, w) - - pad_t_f = pad_t // 2 - pad_t_b = pad_t - pad_t_f - pad_h_f = pad_h // 2 - pad_h_b = pad_h - pad_h_f - pad_w_f = pad_w // 2 - pad_w_b = pad_w - pad_w_f - - pad = (pad_w_f, pad_w_b, pad_h_f, pad_h_b, pad_t_f, pad_t_b) - x = F.pad(x, pad) - return super(MaxPool3dSamePadding, self).forward(x) - - -class Unit3D(nn.Module): - def __init__(self, - in_channels, - output_channels, - kernel_shape=(1, 1, 1), - stride=(1, 1, 1), - padding=0, - activation_fn=F.relu, - use_batch_norm=True, - use_bias=False, - name='unit_3d'): - """Initializes Unit3D module.""" - super(Unit3D, self).__init__() - - self._output_channels = output_channels - self._kernel_shape = kernel_shape - self._stride = stride - self._use_batch_norm = use_batch_norm - self._activation_fn = activation_fn - self._use_bias = use_bias - self.name = name - self.padding = padding - - self.conv3d = nn.Conv3d( - in_channels=in_channels, - out_channels=self._output_channels, - 
kernel_size=self._kernel_shape, - stride=self._stride, - padding=0, # we always want padding to be 0 here. We will - # dynamically pad based on input size in forward function - bias=self._use_bias) - - if self._use_batch_norm: - self.bn = nn.BatchNorm3d(self._output_channels, - eps=0.001, - momentum=0.01) - - def compute_pad(self, dim, s): - if s % self._stride[dim] == 0: - return max(self._kernel_shape[dim] - self._stride[dim], 0) - else: - return max(self._kernel_shape[dim] - (s % self._stride[dim]), 0) - - def forward(self, x): - # compute 'same' padding - (batch, channel, t, h, w) = x.size() - pad_t = self.compute_pad(0, t) - pad_h = self.compute_pad(1, h) - pad_w = self.compute_pad(2, w) - - pad_t_f = pad_t // 2 - pad_t_b = pad_t - pad_t_f - pad_h_f = pad_h // 2 - pad_h_b = pad_h - pad_h_f - pad_w_f = pad_w // 2 - pad_w_b = pad_w - pad_w_f - - pad = (pad_w_f, pad_w_b, pad_h_f, pad_h_b, pad_t_f, pad_t_b) - x = F.pad(x, pad) - - x = self.conv3d(x) - if self._use_batch_norm: - x = self.bn(x) - if self._activation_fn is not None: - x = self._activation_fn(x) - return x - - -class InceptionModule(nn.Module): - def __init__(self, in_channels, out_channels, name): - super(InceptionModule, self).__init__() - - self.b0 = Unit3D(in_channels=in_channels, - output_channels=out_channels[0], - kernel_shape=[1, 1, 1], - padding=0, - name=name + '/Branch_0/Conv3d_0a_1x1') - self.b1a = Unit3D(in_channels=in_channels, - output_channels=out_channels[1], - kernel_shape=[1, 1, 1], - padding=0, - name=name + '/Branch_1/Conv3d_0a_1x1') - self.b1b = Unit3D(in_channels=out_channels[1], - output_channels=out_channels[2], - kernel_shape=[3, 3, 3], - name=name + '/Branch_1/Conv3d_0b_3x3') - self.b2a = Unit3D(in_channels=in_channels, - output_channels=out_channels[3], - kernel_shape=[1, 1, 1], - padding=0, - name=name + '/Branch_2/Conv3d_0a_1x1') - self.b2b = Unit3D(in_channels=out_channels[3], - output_channels=out_channels[4], - kernel_shape=[3, 3, 3], - name=name + '/Branch_2/Conv3d_0b_3x3') - self.b3a = MaxPool3dSamePadding(kernel_size=[3, 3, 3], - stride=(1, 1, 1), - padding=0) - self.b3b = Unit3D(in_channels=in_channels, - output_channels=out_channels[5], - kernel_shape=[1, 1, 1], - padding=0, - name=name + '/Branch_3/Conv3d_0b_1x1') - self.name = name - - def forward(self, x): - b0 = self.b0(x) - b1 = self.b1b(self.b1a(x)) - b2 = self.b2b(self.b2a(x)) - b3 = self.b3b(self.b3a(x)) - return torch.cat([b0, b1, b2, b3], dim=1) - - -class InceptionI3d(nn.Module): - """Inception-v1 I3D architecture. - The model is introduced in: - Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset - Joao Carreira, Andrew Zisserman - https://arxiv.org/pdf/1705.07750v1.pdf. - See also the Inception architecture, introduced in: - Going deeper with convolutions - Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, - Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. - http://arxiv.org/pdf/1409.4842v1.pdf. - """ - - # Endpoints of the model in order. During construction, all the endpoints up - # to a designated `final_endpoint` are returned in a dictionary as the - # second return value. 
- VALID_ENDPOINTS = ( - 'Conv3d_1a_7x7', - 'MaxPool3d_2a_3x3', - 'Conv3d_2b_1x1', - 'Conv3d_2c_3x3', - 'MaxPool3d_3a_3x3', - 'Mixed_3b', - 'Mixed_3c', - 'MaxPool3d_4a_3x3', - 'Mixed_4b', - 'Mixed_4c', - 'Mixed_4d', - 'Mixed_4e', - 'Mixed_4f', - 'MaxPool3d_5a_2x2', - 'Mixed_5b', - 'Mixed_5c', - 'Logits', - 'Predictions', - ) - - def __init__(self, - num_classes=400, - spatial_squeeze=True, - final_endpoint='Logits', - name='inception_i3d', - in_channels=3, - dropout_keep_prob=0.5): - """Initializes I3D model instance. - Args: - num_classes: The number of outputs in the logit layer (default 400, which - matches the Kinetics dataset). - spatial_squeeze: Whether to squeeze the spatial dimensions for the logits - before returning (default True). - final_endpoint: The model contains many possible endpoints. - `final_endpoint` specifies the last endpoint for the model to be built - up to. In addition to the output at `final_endpoint`, all the outputs - at endpoints up to `final_endpoint` will also be returned, in a - dictionary. `final_endpoint` must be one of - InceptionI3d.VALID_ENDPOINTS (default 'Logits'). - name: A string (optional). The name of this module. - Raises: - ValueError: if `final_endpoint` is not recognized. - """ - - if final_endpoint not in self.VALID_ENDPOINTS: - raise ValueError('Unknown final endpoint %s' % final_endpoint) - - super(InceptionI3d, self).__init__() - self._num_classes = num_classes - self._spatial_squeeze = spatial_squeeze - self._final_endpoint = final_endpoint - self.logits = None - - if self._final_endpoint not in self.VALID_ENDPOINTS: - raise ValueError('Unknown final endpoint %s' % - self._final_endpoint) - - self.end_points = {} - end_point = 'Conv3d_1a_7x7' - self.end_points[end_point] = Unit3D(in_channels=in_channels, - output_channels=64, - kernel_shape=[7, 7, 7], - stride=(2, 2, 2), - padding=(3, 3, 3), - name=name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'MaxPool3d_2a_3x3' - self.end_points[end_point] = MaxPool3dSamePadding( - kernel_size=[1, 3, 3], stride=(1, 2, 2), padding=0) - if self._final_endpoint == end_point: - return - - end_point = 'Conv3d_2b_1x1' - self.end_points[end_point] = Unit3D(in_channels=64, - output_channels=64, - kernel_shape=[1, 1, 1], - padding=0, - name=name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'Conv3d_2c_3x3' - self.end_points[end_point] = Unit3D(in_channels=64, - output_channels=192, - kernel_shape=[3, 3, 3], - padding=1, - name=name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'MaxPool3d_3a_3x3' - self.end_points[end_point] = MaxPool3dSamePadding( - kernel_size=[1, 3, 3], stride=(1, 2, 2), padding=0) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_3b' - self.end_points[end_point] = InceptionModule(192, - [64, 96, 128, 16, 32, 32], - name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_3c' - self.end_points[end_point] = InceptionModule( - 256, [128, 128, 192, 32, 96, 64], name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'MaxPool3d_4a_3x3' - self.end_points[end_point] = MaxPool3dSamePadding( - kernel_size=[3, 3, 3], stride=(2, 2, 2), padding=0) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_4b' - self.end_points[end_point] = InceptionModule( - 128 + 192 + 96 + 64, [192, 96, 208, 16, 48, 64], name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_4c' - 
self.end_points[end_point] = InceptionModule( - 192 + 208 + 48 + 64, [160, 112, 224, 24, 64, 64], name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_4d' - self.end_points[end_point] = InceptionModule( - 160 + 224 + 64 + 64, [128, 128, 256, 24, 64, 64], name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_4e' - self.end_points[end_point] = InceptionModule( - 128 + 256 + 64 + 64, [112, 144, 288, 32, 64, 64], name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_4f' - self.end_points[end_point] = InceptionModule( - 112 + 288 + 64 + 64, [256, 160, 320, 32, 128, 128], - name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'MaxPool3d_5a_2x2' - self.end_points[end_point] = MaxPool3dSamePadding( - kernel_size=[2, 2, 2], stride=(2, 2, 2), padding=0) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_5b' - self.end_points[end_point] = InceptionModule( - 256 + 320 + 128 + 128, [256, 160, 320, 32, 128, 128], - name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'Mixed_5c' - self.end_points[end_point] = InceptionModule( - 256 + 320 + 128 + 128, [384, 192, 384, 48, 128, 128], - name + end_point) - if self._final_endpoint == end_point: - return - - end_point = 'Logits' - self.avg_pool = nn.AvgPool3d(kernel_size=[2, 7, 7], stride=(1, 1, 1)) - self.dropout = nn.Dropout(dropout_keep_prob) - self.logits = Unit3D(in_channels=384 + 384 + 128 + 128, - output_channels=self._num_classes, - kernel_shape=[1, 1, 1], - padding=0, - activation_fn=None, - use_batch_norm=False, - use_bias=True, - name='logits') - - self.build() - - def replace_logits(self, num_classes): - self._num_classes = num_classes - self.logits = Unit3D(in_channels=384 + 384 + 128 + 128, - output_channels=self._num_classes, - kernel_shape=[1, 1, 1], - padding=0, - activation_fn=None, - use_batch_norm=False, - use_bias=True, - name='logits') - - def build(self): - for k in self.end_points.keys(): - self.add_module(k, self.end_points[k]) - - def forward(self, x): - for end_point in self.VALID_ENDPOINTS: - if end_point in self.end_points: - x = self._modules[end_point]( - x) # use _modules to work with dataparallel - - x = self.logits(self.dropout(self.avg_pool(x))) - if self._spatial_squeeze: - logits = x.squeeze(3).squeeze(3) - # logits is batch X time X classes, which is what we want to work with - return logits - - def extract_features(self, x, target_endpoint='Logits'): - for end_point in self.VALID_ENDPOINTS: - if end_point in self.end_points: - x = self._modules[end_point](x) - if end_point == target_endpoint: - break - if target_endpoint == 'Logits': - return x.mean(4).mean(3).mean(2) - else: - return x diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/frontends/feature_transform.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/frontends/feature_transform.py deleted file mode 100644 index 700f63fdd0831e4cc20a12ecde7f4c9bd360ca4c..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/frontends/feature_transform.py +++ /dev/null @@ -1,263 +0,0 @@ -from typing import List -from typing import Tuple -from typing import Union - -import librosa -import numpy as np -import torch -from torch_complex.tensor import ComplexTensor - -from espnet.nets.pytorch_backend.nets_utils import make_pad_mask - - -class FeatureTransform(torch.nn.Module): - def __init__( - self, - # Mel options, 
- fs: int = 16000, - n_fft: int = 512, - n_mels: int = 80, - fmin: float = 0.0, - fmax: float = None, - # Normalization - stats_file: str = None, - apply_uttmvn: bool = True, - uttmvn_norm_means: bool = True, - uttmvn_norm_vars: bool = False, - ): - super().__init__() - self.apply_uttmvn = apply_uttmvn - - self.logmel = LogMel(fs=fs, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax) - self.stats_file = stats_file - if stats_file is not None: - self.global_mvn = GlobalMVN(stats_file) - else: - self.global_mvn = None - - if self.apply_uttmvn is not None: - self.uttmvn = UtteranceMVN( - norm_means=uttmvn_norm_means, norm_vars=uttmvn_norm_vars - ) - else: - self.uttmvn = None - - def forward( - self, x: ComplexTensor, ilens: Union[torch.LongTensor, np.ndarray, List[int]] - ) -> Tuple[torch.Tensor, torch.LongTensor]: - # (B, T, F) or (B, T, C, F) - if x.dim() not in (3, 4): - raise ValueError(f"Input dim must be 3 or 4: {x.dim()}") - if not torch.is_tensor(ilens): - ilens = torch.from_numpy(np.asarray(ilens)).to(x.device) - - if x.dim() == 4: - # h: (B, T, C, F) -> h: (B, T, F) - if self.training: - # Select 1ch randomly - ch = np.random.randint(x.size(2)) - h = x[:, :, ch, :] - else: - # Use the first channel - h = x[:, :, 0, :] - else: - h = x - - # h: ComplexTensor(B, T, F) -> torch.Tensor(B, T, F) - h = h.real ** 2 + h.imag ** 2 - - h, _ = self.logmel(h, ilens) - if self.stats_file is not None: - h, _ = self.global_mvn(h, ilens) - if self.apply_uttmvn: - h, _ = self.uttmvn(h, ilens) - - return h, ilens - - -class LogMel(torch.nn.Module): - """Convert STFT to fbank feats - - The arguments is same as librosa.filters.mel - - Args: - fs: number > 0 [scalar] sampling rate of the incoming signal - n_fft: int > 0 [scalar] number of FFT components - n_mels: int > 0 [scalar] number of Mel bands to generate - fmin: float >= 0 [scalar] lowest frequency (in Hz) - fmax: float >= 0 [scalar] highest frequency (in Hz). - If `None`, use `fmax = fs / 2.0` - htk: use HTK formula instead of Slaney - norm: {None, 1, np.inf} [scalar] - if 1, divide the triangular mel weights by the width of the mel band - (area normalization). Otherwise, leave all the triangles aiming for - a peak value of 1.0 - - """ - - def __init__( - self, - fs: int = 16000, - n_fft: int = 512, - n_mels: int = 80, - fmin: float = 0.0, - fmax: float = None, - htk: bool = False, - norm=1, - ): - super().__init__() - - _mel_options = dict( - sr=fs, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax, htk=htk, norm=norm - ) - self.mel_options = _mel_options - - # Note(kamo): The mel matrix of librosa is different from kaldi. - melmat = librosa.filters.mel(**_mel_options) - # melmat: (D2, D1) -> (D1, D2) - self.register_buffer("melmat", torch.from_numpy(melmat.T).float()) - - def extra_repr(self): - return ", ".join(f"{k}={v}" for k, v in self.mel_options.items()) - - def forward( - self, feat: torch.Tensor, ilens: torch.LongTensor - ) -> Tuple[torch.Tensor, torch.LongTensor]: - # feat: (B, T, D1) x melmat: (D1, D2) -> mel_feat: (B, T, D2) - mel_feat = torch.matmul(feat, self.melmat) - - logmel_feat = (mel_feat + 1e-20).log() - # Zero padding - logmel_feat = logmel_feat.masked_fill(make_pad_mask(ilens, logmel_feat, 1), 0.0) - return logmel_feat, ilens - - -class GlobalMVN(torch.nn.Module): - """Apply global mean and variance normalization - - Args: - stats_file(str): npy file of 1-dim array or text file. 
- From the _first element to - the {(len(array) - 1) / 2}th element are treated as - the sum of features, - and the rest excluding the last elements are - treated as the sum of the square value of features, - and the last elements eqauls to the number of samples. - std_floor(float): - """ - - def __init__( - self, - stats_file: str, - norm_means: bool = True, - norm_vars: bool = True, - eps: float = 1.0e-20, - ): - super().__init__() - self.norm_means = norm_means - self.norm_vars = norm_vars - - self.stats_file = stats_file - stats = np.load(stats_file) - - stats = stats.astype(float) - assert (len(stats) - 1) % 2 == 0, stats.shape - - count = stats.flatten()[-1] - mean = stats[: (len(stats) - 1) // 2] / count - var = stats[(len(stats) - 1) // 2 : -1] / count - mean * mean - std = np.maximum(np.sqrt(var), eps) - - self.register_buffer("bias", torch.from_numpy(-mean.astype(np.float32))) - self.register_buffer("scale", torch.from_numpy(1 / std.astype(np.float32))) - - def extra_repr(self): - return ( - f"stats_file={self.stats_file}, " - f"norm_means={self.norm_means}, norm_vars={self.norm_vars}" - ) - - def forward( - self, x: torch.Tensor, ilens: torch.LongTensor - ) -> Tuple[torch.Tensor, torch.LongTensor]: - # feat: (B, T, D) - if self.norm_means: - x += self.bias.type_as(x) - x.masked_fill(make_pad_mask(ilens, x, 1), 0.0) - - if self.norm_vars: - x *= self.scale.type_as(x) - return x, ilens - - -class UtteranceMVN(torch.nn.Module): - def __init__( - self, norm_means: bool = True, norm_vars: bool = False, eps: float = 1.0e-20 - ): - super().__init__() - self.norm_means = norm_means - self.norm_vars = norm_vars - self.eps = eps - - def extra_repr(self): - return f"norm_means={self.norm_means}, norm_vars={self.norm_vars}" - - def forward( - self, x: torch.Tensor, ilens: torch.LongTensor - ) -> Tuple[torch.Tensor, torch.LongTensor]: - return utterance_mvn( - x, ilens, norm_means=self.norm_means, norm_vars=self.norm_vars, eps=self.eps - ) - - -def utterance_mvn( - x: torch.Tensor, - ilens: torch.LongTensor, - norm_means: bool = True, - norm_vars: bool = False, - eps: float = 1.0e-20, -) -> Tuple[torch.Tensor, torch.LongTensor]: - """Apply utterance mean and variance normalization - - Args: - x: (B, T, D), assumed zero padded - ilens: (B, T, D) - norm_means: - norm_vars: - eps: - - """ - ilens_ = ilens.type_as(x) - # mean: (B, D) - mean = x.sum(dim=1) / ilens_[:, None] - - if norm_means: - x -= mean[:, None, :] - x_ = x - else: - x_ = x - mean[:, None, :] - - # Zero padding - x_.masked_fill(make_pad_mask(ilens, x_, 1), 0.0) - if norm_vars: - var = x_.pow(2).sum(dim=1) / ilens_[:, None] - var = torch.clamp(var, min=eps) - x /= var.sqrt()[:, None, :] - x_ = x - return x_, ilens - - -def feature_transform_for(args, n_fft): - return FeatureTransform( - # Mel options, - fs=args.fbank_fs, - n_fft=n_fft, - n_mels=args.n_mels, - fmin=args.fbank_fmin, - fmax=args.fbank_fmax, - # Normalization - stats_file=args.stats_file, - apply_uttmvn=args.apply_uttmvn, - uttmvn_norm_means=args.uttmvn_norm_means, - uttmvn_norm_vars=args.uttmvn_norm_vars, - ) diff --git a/spaces/senger/AI-Text-Generator/style.css b/spaces/senger/AI-Text-Generator/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/senger/AI-Text-Generator/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 
114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/sharjeel1477/Brain/ask.py b/spaces/sharjeel1477/Brain/ask.py deleted file mode 100644 index 7b3dc840d627717306d5a03701b6cbb2a02da332..0000000000000000000000000000000000000000 --- a/spaces/sharjeel1477/Brain/ask.py +++ /dev/null @@ -1,103 +0,0 @@ -from llama_index import GPTPineconeIndex, LLMPredictor, ServiceContext -import pinecone -from langchain import OpenAI -import os -from llama_index.langchain_helpers.agents import IndexToolConfig, LlamaIndexTool, LlamaToolkit, create_llama_chat_agent -from langchain.chains.conversation.memory import ConversationBufferMemory -from llama_index import QuestionAnswerPrompt - - -# logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout)) -pinecone_key=os.environ['PINECONE_KEY'] - -def askQuestion(brain, question, prompt, temperature, maxTokens): - temperature = float(temperature) - finalQuestion = prompt+question - print(finalQuestion) - print(temperature, maxTokens) - #print(type(temperature)) - #print(type(maxTokens)) - Brain_Name = brain.lower() - print(Brain_Name) - pinecone.init(api_key=pinecone_key, - environment="us-west4-gcp") - pineconeindex = pinecone.Index(Brain_Name) - pineconeindex.describe_index_stats - index = GPTPineconeIndex([], pinecone_index=pineconeindex) - # index = GPTSimpleVectorIndex.load_from_disk('index.json') - - # For Q-A set this value to 4, For Content-Genration set this value b/w 7-10. - data_chunks = 5 - - QA_PROMPT_TMPL = ( - "We have provided context information below. \n" - "---------------------\n" - "{context_str}" - "\n---------------------\n" - "Given this information, please answer the question at the end of this main prompt: "+prompt+" {query_str}\n" - ) - - QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL) - - query = question - # relevant info from brain goes here - info = ["pdf"] - - llm_predictor = LLMPredictor(llm=OpenAI( - temperature=temperature, model_name="text-davinci-003", max_tokens=maxTokens)) - - service_context_gpt4 = ServiceContext.from_defaults( - llm_predictor=llm_predictor) - - response = index.query(query, service_context=service_context_gpt4, - similarity_top_k=data_chunks, response_mode="compact",text_qa_template=QA_PROMPT) - print(question) - print(response) - if(response.response==None): - return response,False - memory = ConversationBufferMemory(memory_key="chat_history") - memory.chat_memory.add_user_message(question) - memory.chat_memory.add_ai_message(response.response) - return response, memory - - -def getBrains(name): - pinecone.init(api_key=pinecone_key, - environment="us-west4-gcp") - active_indexes = pinecone.list_indexes() - print(active_indexes) - name = name.lower() - if name in active_indexes: - return True - else: - return False - - -def runAgent(brainName,memory, question, temperature, maxTokens): - if (memory == False): - return "Please Initiate the Chat first.." 
- temperature = float(temperature) - pinecone.init(api_key=pinecone_key, - environment="us-west4-gcp") - pineconeindex = pinecone.Index(brainName) - index = GPTPineconeIndex([], pinecone_index=pineconeindex) - # memory = ConversationBufferMemory(memory_key="chat_history") - print(memory.chat_memory) - llm = OpenAI( - temperature=temperature, model_name="text-davinci-003", max_tokens=maxTokens) - tool_config = IndexToolConfig( - index=index, - name="Vector Index", - description="Use this tool if you can't find the required Information in the previous message history", - index_query_kwargs={"similarity_top_k": 4, "response_mode": "compact"}, - tool_kwargs={"return_direct": True} - ) - - toolkit = LlamaToolkit(index_configs=[tool_config]) - - agent_chain = create_llama_chat_agent( - toolkit, llm, memory=memory, verbose=True) - response = agent_chain.run(question) - print(memory.chat_memory) - return response, memory diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer/train-index.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer/train-index.py deleted file mode 100644 index 04396a2241ed27c999a6687aa7b9880941edbcf3..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer/train-index.py +++ /dev/null @@ -1,36 +0,0 @@ -""" -格式:直接cid为自带的index位;aid放不下了,通过字典来查,反正就5w个 -""" -import faiss, numpy as np, os - -# ###########如果是原始特征要先写save -inp_root = r"E:\codes\py39\dataset\mi\2-co256" -npys = [] -for name in sorted(list(os.listdir(inp_root))): - phone = np.load("%s/%s" % (inp_root, name)) - npys.append(phone) -big_npy = np.concatenate(npys, 0) -print(big_npy.shape) # (6196072, 192)#fp32#4.43G -np.save("infer/big_src_feature_mi.npy", big_npy) - -##################train+add -# big_npy=np.load("/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/inference_f0/big_src_feature_mi.npy") -print(big_npy.shape) -index = faiss.index_factory(256, "IVF512,Flat") # mi -print("training") -index_ivf = faiss.extract_index_ivf(index) # -index_ivf.nprobe = 9 -index.train(big_npy) -faiss.write_index(index, "infer/trained_IVF512_Flat_mi_baseline_src_feat.index") -print("adding") -index.add(big_npy) -faiss.write_index(index, "infer/added_IVF512_Flat_mi_baseline_src_feat.index") -""" -大小(都是FP32) -big_src_feature 2.95G - (3098036, 256) -big_emb 4.43G - (6196072, 192) -big_emb双倍是因为求特征要repeat后再加pitch - -""" diff --git a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/evaluation/instance_evaluation.py b/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/evaluation/instance_evaluation.py deleted file mode 100644 index bc2facec351e5f6ee965ee9acb4394f12c023f54..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/evaluation/instance_evaluation.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import contextlib -import copy -import io -import itertools -import json -import logging -import numpy as np -import os -import pickle -from collections import OrderedDict -import pycocotools.mask as mask_util -import torch -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from tabulate import tabulate - -import detectron2.utils.comm as comm -from detectron2.config import CfgNode -from detectron2.data import MetadataCatalog -from detectron2.data.datasets.coco import convert_to_coco_json -from detectron2.evaluation.coco_evaluation import COCOEvaluator, _evaluate_predictions_on_coco -from detectron2.evaluation.fast_eval_api import COCOeval_opt -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import create_small_table - - -# modified from COCOEvaluator for instance segmetnat -class InstanceSegEvaluator(COCOEvaluator): - """ - Evaluate AR for object proposals, AP for instance detection/segmentation, AP - for keypoint detection outputs using COCO's metrics. - See http://cocodataset.org/#detection-eval and - http://cocodataset.org/#keypoints-eval to understand its metrics. - The metrics range from 0 to 100 (instead of 0 to 1), where a -1 or NaN means - the metric cannot be computed (e.g. due to no predictions made). - - In addition to COCO, this evaluator is able to support any bounding box detection, - instance segmentation, or keypoint detection dataset. - """ - - def _eval_predictions(self, predictions, img_ids=None): - """ - Evaluate predictions. Fill self._results with the metrics of the tasks. - """ - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(coco_results) - - # unmap the category ids for COCO - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - dataset_id_to_contiguous_id = self._metadata.thing_dataset_id_to_contiguous_id - # all_contiguous_ids = list(dataset_id_to_contiguous_id.values()) - # num_classes = len(all_contiguous_ids) - # assert min(all_contiguous_ids) == 0 and max(all_contiguous_ids) == num_classes - 1 - - reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()} - for result in coco_results: - category_id = result["category_id"] - # assert category_id < num_classes, ( - # f"A prediction has class={category_id}, " - # f"but the dataset only has {num_classes} classes and " - # f"predicted class id should be in [0, {num_classes - 1}]." - # ) - assert category_id in reverse_id_mapping, ( - f"A prediction has class={category_id}, " - f"but the dataset only has class ids in {dataset_id_to_contiguous_id}." - ) - result["category_id"] = reverse_id_mapping[category_id] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info( - "Evaluating predictions with {} COCO API...".format( - "unofficial" if self._use_fast_impl else "official" - ) - ) - for task in sorted(tasks): - assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!" 
- coco_eval = ( - _evaluate_predictions_on_coco( - self._coco_api, - coco_results, - task, - kpt_oks_sigmas=self._kpt_oks_sigmas, - use_fast_impl=self._use_fast_impl, - img_ids=img_ids, - max_dets_per_image=self._max_dets_per_image, - ) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res diff --git a/spaces/shivammittal274/LLM_CA/README.md b/spaces/shivammittal274/LLM_CA/README.md deleted file mode 100644 index 02906486c56f6d599fe606225607a3b6bf28d3a2..0000000000000000000000000000000000000000 --- a/spaces/shivammittal274/LLM_CA/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: LLM CA -emoji: 🌍 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Hack Clash Royale and Get Unlimited Gems Gold and Elixir.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Hack Clash Royale and Get Unlimited Gems Gold and Elixir.md deleted file mode 100644 index 65a17ac9d1eabc3a57b686d297a443d51e18a8d9..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Hack Clash Royale and Get Unlimited Gems Gold and Elixir.md +++ /dev/null @@ -1,78 +0,0 @@ -
                  -

                  Download Hack Clash Royale: How to Get Unlimited Gems and Gold

                  -

                  Do you love playing Clash Royale but hate spending money on gems and gold? Do you want to unlock all the chests, cards, and upgrades without waiting for hours or days? Do you want to dominate the arena and crush your opponents with ease? If you answered yes to any of these questions, then you need to download hack clash royale and get unlimited gems and gold.

                  -

                  download hack clash royale


                  Download File ✪✪✪ https://ssurll.com/2uNQMz



                  -

                  What is Clash Royale and why do you need gems and gold?

                  -

                  Clash Royale is a popular strategy game for mobile devices

                  -

                  Clash Royale is a free-to-play multiplayer online battle arena (MOBA) game developed by Supercell, the makers of Clash of Clans. In this game, you can collect and upgrade dozens of cards featuring characters, spells, and defenses from the Clash universe. You can also form clans with other players and share cards and strategies. The goal of the game is to destroy your opponent's towers and king tower before they destroy yours.

                  -

                  Gems and gold are the main currencies in the game

                  -

                  Gems and gold are the two resources that you need to progress in Clash Royale. Gems are used to speed up chest opening, buy chests, cards, or gold from the shop, or enter special events. Gold is used to upgrade your cards or buy cards from the shop. You can earn gems and gold by winning battles, opening chests, completing quests, or watching ads. However, these methods are slow and limited. You can also buy gems and gold with real money, but that can be expensive and not everyone can afford it.

                  -

                  You can use gems and gold to unlock chests, cards, upgrades, and more

                  -

                  Gems and gold are essential for improving your deck and increasing your chances of winning. With gems and gold, you can unlock more chests that contain cards, gold, or gems. You can also buy more cards from the shop or upgrade your existing cards to make them stronger. You can also enter special events that offer exclusive rewards or challenges. With more gems and gold, you can have more fun and variety in Clash Royale.

                  -

                  How to download hack clash royale and get unlimited gems and gold?

                  -

                  There are many websites and apps that claim to offer clash royale hacks

                  -

                  If you search online for "download hack clash royale", you will find many results that promise to give you unlimited gems and gold for free. Some of them may ask you to download an app or a file, while others may ask you to complete a survey or verify your identity. Some of them may even ask you to enter your account details or personal information.

                  -

    

                  -

                  However, most of them are fake, unsafe, or illegal

                  -

    The truth is that most of these websites and apps are scams that want to steal your data, infect your device with malware, or make money from ads or surveys. They do not work as advertised, and they may harm your device or account. Some of them may even get you banned from the game or expose you to legal consequences for violating the terms of service.
    

                  -

                  The best way to download hack clash royale is to use a bot_Clash_Royale

                  -

    The only reliable and safe way to download hack clash royale is to use a bot_Clash_Royale, which is explained in the next section.
    

                  What is bot_Clash_Royale and how does it work?

                  -

                  A bot_Clash_Royale is a software program that automates the gameplay of Clash Royale for you. It can perform various tasks such as opening chests, collecting rewards, donating cards, requesting cards, playing battles, and more. It can also generate unlimited gems and gold for you by exploiting a glitch in the game server. A bot_Clash_Royale works by simulating human actions and sending requests to the game server. It does not require any root or jailbreak and it is undetectable by the game's anti-cheat system.

                  -

                  How to install and use bot_Clash_Royale?

                  -

                  To install and use bot_Clash_Royale, you need to follow these simple steps:

                  -
                    -
    1. Download the bot_Clash_Royale app from the official website.
    
                  2. -
                  3. Install the app on your device and launch it.
                  4. -
                  5. Enter your Clash Royale username and select your device type (Android or iOS).
                  6. -
                  7. Select the features you want to activate, such as gems, gold, chests, cards, etc.
                  8. -
                  9. Tap on the "Start" button and wait for the bot to do its magic.
                  10. -
                  11. Enjoy your unlimited gems and gold and dominate the game.
                  12. -
                  -

                  What are the benefits and risks of using bot_Clash_Royale?

                  -

                  Using bot_Clash_Royale has many benefits and some risks. Here are some of them:

    
    | Benefits | Risks |
    | --- | --- |
    | You can get unlimited gems and gold for free. | You may encounter some bugs or errors while using the bot. |
    | You can unlock all the chests, cards, and upgrades without waiting or spending money. | You may lose some of the fun and challenge of playing the game. |
    | You can win more battles and climb the ladder faster. | You may get bored or addicted to the game. |
    | You can save time and energy by letting the bot play for you. | You may violate the terms of service of the game and risk getting banned or sued. |
    
                  -

                  Conclusion

                  -

                  In conclusion, Clash Royale is a fun and addictive game that requires gems and gold to progress. However, earning gems and gold can be slow, limited, or expensive. That's why many players look for ways to download hack clash royale and get unlimited gems and gold. The best way to do that is to use a bot_Clash_Royale, which is a safe and reliable software that automates the gameplay and generates unlimited resources for you. With a bot_Clash_Royale, you can enjoy the game without any hassle or cost. However, you should also be aware of the risks and consequences of using a hack. You should use it responsibly and at your own risk.

                  -

                  FAQs

                  -

                  Q: Is bot_Clash_Royale free?

                  -

                  A: Yes, bot_Clash_Royale is free to download and use. However, you may need to complete a human verification process before accessing the app.

                  -

                  Q: Is bot_Clash_Royale safe?

                  -

                  A: Yes, bot_Clash_Royale is safe to use. It does not contain any viruses or malware and it does not harm your device or account. It is also undetectable by the game's anti-cheat system.

                  -

                  Q: Is bot_Clash_Royale legal?

                  -

                  A: No, bot_Clash_Royale is not legal. It violates the terms of service of Clash Royale and Supercell. Using a hack may result in account suspension or termination, legal action, or other penalties.

                  -

                  Q: How often does bot_Clash_Royale update?

                  -

                  A: Bot_Clash_Royale updates regularly to keep up with the latest version of Clash Royale and fix any bugs or errors. You can check for updates on the official website or on the app itself.

                  -

                  Q: Can I use bot_Clash_Royale on multiple devices or accounts?

                  -

                  A: Yes, you can use bot_Clash_Royale on multiple devices or accounts. However, you should not use it on more than one device or account at the same time. This may cause conflicts or errors with the game server or the bot itself.

                  401be4b1e0
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile 2023 MOD APK The Ultimate World Cup Experience with Unlimited Money.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile 2023 MOD APK The Ultimate World Cup Experience with Unlimited Money.md deleted file mode 100644 index 0932bfdf91e21d60b9e6df1e46840595f0f66d78..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile 2023 MOD APK The Ultimate World Cup Experience with Unlimited Money.md +++ /dev/null @@ -1,75 +0,0 @@ -
                  -

                  Download FIFA Mobile 2023 Mod APK Unlimited Money

                  -

                  If you are a fan of soccer games, you must have heard of FIFA Mobile, the popular mobile game from EA Sports that lets you play with your favorite soccer stars and teams. But did you know that there is a new version of FIFA Mobile coming out in 2023? And that you can download a modded version of it that gives you unlimited money and access to all features and modes? In this article, we will tell you everything you need to know about FIFA Mobile 2023 Mod APK, including what it is, what features it has, why you should download it, and how to download and install it on your device. So, let's get started!

                  -

                  What is FIFA Mobile 2023?

                  -

                  FIFA Mobile 2023 is the latest edition of the FIFA Mobile series, which is based on the FIFA World Cup 2022™ that will take place in Qatar. The game will feature updated players, kits, clubs, and leagues to reflect the real world 22/23 soccer season. You will be able to build your ultimate team with over 15,000 authentic soccer stars from over 600 teams, including Chelsea, Paris SG, Real Madrid, Liverpool, and Juventus. You will also be able to relive the world's greatest soccer tournament with the FIFA World Cup 2022™ mode, where you can replay the official tournament brackets with any of the 32 qualified nations. You can also play with soccer icons and heroes from over 30+ leagues, such as Paolo Maldini, Ronaldinho, Kylian Mbappé, and more. And you can experience immersive next-level soccer simulation with upgraded stadiums, realistic graphics, and live commentary. FIFA Mobile 2023 is the ultimate soccer game for mobile devices.

                  -

                  download fifa mobile 2023 mod apk unlimited money


                  Download File ::: https://ssurll.com/2uO0UW



                  -

                  Features of FIFA Mobile 2023

                  -

                  FIFA Mobile 2023 has many features that make it stand out from other soccer games. Here are some of them:

                  -

                  Build your ultimate team with star players from the biggest leagues and top teams

                  -

                  You can create your own dream team in FIFA Mobile 2023 by collecting player items and putting your favorite soccer stars to the test. You can score goals with some of the world's best players as you level up a team of soccer superstars. You can compete against the best in pvp modes, such as Head-to-Head, VS Attack, and Manager Mode. You can also customize your team's formation, tactics, kits, and badges to suit your style.

                  -

                  Relive the FIFA World Cup 2022™ mode with official licenses

                  -

                  You can relive the world's greatest soccer tournament in FIFA Mobile 2023 with the FIFA World Cup 2022™ mode. You can unlock soccer stars from all 32 qualified national teams with official licenses. You can also enjoy authentic World Cup national team kits and badges, the official match ball, and play in World Cup stadiums (Al Bayt and Lusail). You can also listen to localized World Cup commentary to bring the most immersive match atmosphere.

                  -

                  Play with soccer icons and heroes from over 30+ leagues

                  -

                  You can play with soccer icons and heroes from over 30+ leagues in FIFA Mobile 2023. You can score big with world soccer icons like Paolo Maldini, Ronaldinho, & Kylian Mbappé, and more. You can also play with soccer heroes from over 30+ leagues, such as Lionel Messi, Cristiano Ronaldo, Neymar Jr., and more. You can also unlock special items and rewards by completing soccer icon and hero challenges.

                  -

                  Experience immersive next-level soccer simulation with upgraded stadiums and commentary

                  -

                  You can experience immersive next-level soccer simulation in FIFA Mobile 2023 with upgraded stadiums and commentary. You can play in stunning 3D stadiums that capture the essence of the world's most famous soccer venues, such as Camp Nou, Santiago Bernabéu, Old Trafford, and more. You can also enjoy realistic graphics and animations that bring the game to life. You can also listen to live commentary from some of the best soccer commentators in the world, such as Martin Tyler, Alan Smith, Derek Rae, and more.

                  -

                  Manage your own dream team with manager mode

                  -

                  You can manage your own dream team in FIFA Mobile 2023 with manager mode. You can take charge of every aspect of your team, from transfers, contracts, training, tactics, and more. You can also scout and sign new players to improve your squad. You can also compete against other managers in online leagues and tournaments. You can also earn rewards and trophies by leading your team to glory.

                  -

                  Why download FIFA Mobile 2023 Mod APK?

                  -

                  If you are wondering why you should download FIFA Mobile 2023 Mod APK, here are some reasons:

                  -

                  Unlocked all features and modes

                  -

                  With FIFA Mobile 2023 Mod APK, you can unlock all the features and modes that the game has to offer. You can access the FIFA World Cup 2022™ mode, the soccer icons and heroes mode, the manager mode, and more. You can also play without any restrictions or limitations.

                  -

                  download fifa mobile 2023 mod menu apk with unlimited coins and gems
                  -how to download fifa mobile 2023 mod apk hack for android and ios
                  -download fifa soccer mobile 2023 mod apk latest version with all players unlocked
                  -download fifa mobile 2023 mod apk offline mode with unlimited money and energy
                  -download fifa mobile 2023 mod apk free shopping and no ads
                  -download fifa mobile 2023 mod apk unlimited money and points
                  -download fifa mobile 2023 mod apk full unlocked with world cup mode
                  -download fifa mobile 2023 mod apk unlimited money and tokens
                  -download fifa mobile 2023 mod apk mega mod with all features unlocked
                  -download fifa mobile 2023 mod apk unlimited money and gold
                  -download fifa mobile 2023 mod apk vip mod with premium features
                  -download fifa mobile 2023 mod apk unlimited money and diamonds
                  -download fifa mobile 2023 mod apk unlimited money and stars
                  -download fifa mobile 2023 mod apk unlimited money and credits
                  -download fifa mobile 2023 mod apk unlimited money and cash
                  -download fifa mobile 2023 mod apk pro mod with advanced settings
                  -download fifa mobile 2023 mod apk unlimited money and kits
                  -download fifa mobile 2023 mod apk unlimited money and skills
                  -download fifa mobile 2023 mod apk god mode with invincibility and high damage
                  -download fifa mobile 2023 mod apk unlimited money and transfers
                  -download fifa mobile 2023 mod apk unlimited money and stamina
                  -download fifa mobile 2023 mod apk super mod with all leagues and teams unlocked
                  -download fifa mobile 2023 mod apk unlimited money and badges
                  -download fifa mobile 2023 mod apk unlimited money and trophies
                  -download fifa mobile 2023 mod apk ultimate mod with all modes and options unlocked

                  -

                  Unlimited money and coins to buy players and items

                  -

                  With FIFA Mobile 2023 Mod APK, you can get unlimited money and coins to buy players and items in the game. You can buy any player you want from the market or the packs. You can also buy any item you need to boost your team's performance, such as kits, balls, stadiums, and more.

                  -

                  Menu mod to customize your gameplay settings

                  -

                  With FIFA Mobile 2023 Mod APK, you can customize your gameplay settings with the menu mod. You can adjust the difficulty level, the speed, the graphics quality, the sound effects, and more. You can also enable or disable cheats, such as auto-win, god mode, unlimited stamina, and more.

                  -

                  How to download and install FIFA Mobile 2023 Mod APK?

                  -

                  If you want to download and install FIFA Mobile 2023 Mod APK on your device, here are the steps you need to follow:

                  -

                  Step 1: Download the mod APK file from a trusted source

                  -

                  The first step is to download the mod APK file from a trusted source. You can find many websites that offer modded versions of FIFA Mobile 2023 on the internet. However, not all of them are safe or reliable. Some of them may contain viruses or malware that can harm your device or steal your data. Therefore, you should only download the mod APK file from a trusted source that has positive reviews and feedback from other users.

                  -

                  Step 2: Enable unknown sources on your device settings

                  -

                  The second step is to enable unknown sources on your device settings. This is because most devices do not allow installing apps from sources other than the official app store. To enable unknown sources on your device settings, you need to go to Settings > Security > Unknown Sources and toggle it on.

                  -

                  Step 3: Install the mod APK file and launch the game

                  -

                  The third step is to install the mod APK file and launch the game. To install the mod APK file, you need to locate it on your device storage and tap on it. Then, follow the instructions on the screen to complete the installation process. Once the installation is done, you can launch the game from your app drawer or home screen.
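    If you would rather install from a computer than tap the file on the phone, the standard Android platform-tools can sideload an APK over USB. The snippet below is only a rough sketch, not an official procedure: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and the file name used here is purely hypothetical.

    ```python
    # Rough sideloading sketch using the Android platform-tools (adb) from a computer.
    # Assumptions: adb is on PATH, USB debugging is enabled, and the APK path is hypothetical.
    import subprocess

    apk_path = "fifa_mobile_2023_mod.apk"  # replace with the file you actually downloaded

    # Confirm the phone is visible to adb before trying to install anything
    subprocess.run(["adb", "devices"], check=True)

    # Install the APK on the connected device (-r reinstalls over an existing copy)
    subprocess.run(["adb", "install", "-r", apk_path], check=True)
    ```
    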

                  -

                  Conclusion

                  -

                  FIFA Mobile 2023 is a great soccer game for mobile devices that lets you play with your favorite soccer stars and teams. However, if you want to enjoy all the features and modes that the game has to offer without spending any money or facing any restrictions, you should download FIFA Mobile 2023 Mod APK. This modded version of the game gives you unlimited money and coins to buy players and items, unlocks all features and modes, and lets you customize your gameplay settings with a menu mod. To download and install FIFA Mobile 2023 Mod APK on your device, you just need to follow the steps that we have explained in this article. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about FIFA Mobile 2023 Mod APK:

                  -

                  Is FIFA Mobile 2023 Mod APK safe to use?

                  -

                  Yes, FIFA Mobile 2023 Mod APK is safe to use as long as you download it from a trusted source. However, you should always be careful when installing apps from unknown sources and scan them with an antivirus app before installing them.

                  -

                  Is FIFA Mobile 2023 Mod APK compatible with my device?

                  -

                  FIFA Mobile 2023 Mod APK is compatible with most Android devices that have Android 5.0 or higher. However, some devices may not support the game due to hardware limitations or software issues. You can check the compatibility of your device by visiting the official website of FIFA Mobile 2023.

                  -

                  Can I play FIFA Mobile 2023 Mod APK online with other players?

                  -

                  Yes, you can play FIFA Mobile 2023 Mod APK online with other players who have the same modded version of the game. However, you may not be able to play with players who have the original version of the game or a different modded version of the game. You may also face some errors or glitches when playing online due to the modded features.

                  -

                  Can I update FIFA Mobile 2023 Mod APK to the latest version?

                  -

                  No, you cannot update FIFA Mobile 2023 Mod APK to the latest version from the app store or the official website of FIFA Mobile 2023. This is because the modded version of the game is not supported by EA Sports and may not work with the latest updates. If you want to update the game, you will have to download and install a new modded version of the game from a trusted source.

                  -

                  Can I restore my progress if I uninstall FIFA Mobile 2023 Mod APK?

                  -

                  No, you cannot restore your progress if you uninstall FIFA Mobile 2023 Mod APK. This is because the modded version of the game does not sync with your Google Play account or Facebook account. If you want to save your progress, you will have to back up your data manually or use a third-party app to do so.

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Family Town APK A Fun and Challenging Puzzle Game for All Ages.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Family Town APK A Fun and Challenging Puzzle Game for All Ages.md deleted file mode 100644 index 5efccb119852ddba547bebe52a1af1048da21359..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Family Town APK A Fun and Challenging Puzzle Game for All Ages.md +++ /dev/null @@ -1,170 +0,0 @@ -
                  -

                  Family Town: Match-3 Makeover - A Fun and Fashionable Game for Android

                  -

                  If you love match-3 puzzle games and fashion makeover games, then you will love Family Town: Match-3 Makeover. This is a game that combines both genres into one, giving you a chance to express your creativity and have fun at the same time. In this game, you will help a family of fashionistas renovate their old mansion, design their outfits, and discover their stories. You will also enjoy challenging and addictive match-3 puzzles with unique mechanics and boosters. Family Town: Match-3 Makeover is a game that will keep you entertained for hours, whether you play it on your phone or tablet.

                  -

                  family town apk


                  Download File ————— https://ssurll.com/2uNTRc



                  -

                  Introduction

                  -

                  What is Family Town: Match-3 Makeover?

                  -

                  Family Town: Match-3 Makeover is a free-to-play game developed by PlayFlock, a company that specializes in casual games for mobile devices. The game was released in 2021 and has received positive reviews from players and critics alike. The game has over 1 million downloads on Google Play Store and a rating of 4.5 out of 5 stars.

                  -

                  Why should you play Family Town: Match-3 Makeover?

                  -

                  There are many reasons why you should play Family Town: Match-3 Makeover, but here are some of the main ones:

                  -
                    -
                  • You will enjoy a fun and relaxing gameplay that combines match-3 puzzles and fashion makeover.
                  • -
                  • You will explore a beautiful mansion and its surroundings, and decorate them according to your taste.
                  • -
                  • You will meet a family of stylish characters, each with their own personality and story.
                  • -
                  • You will create stunning outfits for the family members, using hundreds of clothes and accessories.
                  • -
                  • You will experience an engaging storyline with twists and turns, romance and drama.
                  • -
                  • You will play offline or online, whenever and wherever you want.
                  • -
                  -

                  Features of Family Town: Match-3 Makeover

                  -

                  Match-3 puzzles with a twist

                  -

                  Family Town: Match-3 Makeover is not your typical match-3 puzzle game. It has some unique features that make it more fun and challenging. For example:

                  -
                    -
                  • You can swap any two adjacent tiles, not just those that form a match.
                  • -
                  • You can create special tiles by matching more than three tiles of the same color or shape.
                  • -
                  • You can use different types of boosters and power-ups to help you clear the board faster.
                  • -
                  • You can face various obstacles and goals in each level, such as ice, crates, keys, locks, flowers, etc.
                  • -
                  • You can earn stars by completing levels with fewer moves or higher scores.
                  • -
                  -

                  Fashion makeover with endless possibilities

                  -

                  Family Town: Match-3 Makeover is also a fashion makeover game that lets you unleash your inner stylist. You can choose from hundreds of clothes and accessories to dress up the family members. You can also mix and match different items to create your own unique looks. You can customize the following aspects of each character:

                  -
                    -
                  • Hair style and color
                  • -
                  • Eye color and shape
                  • -
                  • Skin tone
                  • -
                  • Makeup
                  • -
    • Clothes
    • -
    • Accessories
    
                  • -
                  • Shoes
                  • -
                  -

                  You can also change the outfits of the characters according to the seasons, occasions, and events. For example, you can dress them up for a winter holiday, a summer vacation, a Halloween party, a wedding, etc.

                  -

                  family town match 3 makeover apk
                  -family town apk download
                  -family town apk mod
                  -family town game apk
                  -family town android apk
                  -family town apk latest version
                  -family town apk free download
                  -family town apk offline
                  -family town apk hack
                  -family town apk unlimited money
                  -family town apk update
                  -family town apk old version
                  -family town apk pure
                  -family town apk for pc
                  -family town apk full version
                  -family town home makeover apk
                  -family town story apk
                  -family town garden decoration apk
                  -family town puzzle match apk
                  -family town mansion story apk
                  -family town playflock apk
                  -family town app apk
                  -family town modded apk
                  -family town cracked apk
                  -family town premium apk
                  -family town pro apk
                  -family town unlocked apk
                  -family town cheat apk
                  -family town mega mod apk
                  -family town unlimited coins apk
                  -my family town game apk
                  -my family town mod apk
                  -my family town hack apk
                  -my family town unlimited money apk
                  -my family town offline apk
                  -my family town latest version apk
                  -my family town free download apk
                  -my family town android game apk
                  -my family town simulation game apk
                  -my family town 3d game apk
    

                  -

                  Home and garden decoration with your own style

                  -

                  Family Town: Match-3 Makeover is also a home and garden decoration game that lets you express your creativity. You can renovate and decorate the mansion and its surroundings, using various furniture and items. You can also choose from different themes and styles, such as modern, classic, rustic, vintage, etc. You can customize the following areas of the mansion:

                  -
                    -
                  • Lobby
                  • -
                  • Living room
                  • -
                  • Kitchen
                  • -
                  • Dining room
                  • -
                  • Bedrooms
                  • -
                  • Bathrooms
                  • -
                  • Attic
                  • -
                  • Balcony
                  • -
                  • Garden
                  • -
                  • Pool
                  • -
                  • Garage
                  • -
                  -

                  Storyline with interesting characters and events

                  -

                  Family Town: Match-3 Makeover is not just a game of puzzles and makeovers. It is also a game of stories and emotions. You will follow the lives of the family members, who have their own dreams and secrets. You will also meet other characters, such as neighbors, friends, rivals, and love interests. You will witness how they interact with each other, and how they cope with various situations. You will also have choices to make that will affect the outcome of the story.

                  -

                  How to download and install Family Town: Match-3 Makeover APK

                  -

                  Download from Google Play Store

                  -

                  The easiest way to download and install Family Town: Match-3 Makeover APK is to use the Google Play Store app on your Android device. Here are the steps to follow:

                  -
                    -
                  1. Open the Google Play Store app on your device.
                  2. -
                  3. Search for "Family Town: Match-3 Makeover" in the search bar.
                  4. -
                  5. Select the game from the list of results and tap on "Install".
                  6. -
                  7. Wait for the download and installation to finish.
                  8. -
                  9. Tap on "Open" to launch the game and enjoy.
                  10. -
                  -

                  Download from APKCombo

                  -

                  If you cannot access the Google Play Store app on your device, or if you want to download an older version of Family Town: Match-3 Makeover APK, you can use a third-party website such as APKCombo. Here are the steps to follow:

                  -
                    -
                  1. Open your web browser on your device and go to https://apkcombo.com/.
                  2. -
                  3. Search for "Family Town: Match-3 Makeover" in the search bar.
                  4. -
                  5. Select the game from the list of results and tap on "Download APK".
                  6. -
                  7. Choose the version and architecture that suit your device and tap on "Download".
                  8. -
                  9. Wait for the download to finish and locate the APK file in your device's storage.
                  10. -
                  11. Tap on the APK file and allow installation from unknown sources if prompted.
                  12. -
                  13. Follow the instructions on the screen to install the game.
                  14. -
                  15. Tap on "Open" to launch the game and enjoy.
                  16. -
                  -

                  Tips and tricks for playing Family Town: Match-3 Makeover

                  -

                  Use boosters and power-ups wisely

                  -

                  Boosters and power-ups are special items that can help you clear the board faster and easier. They can be obtained by matching special tiles, completing levels, or buying them with coins or gems. However, they are limited in number and should be used strategically. Here are some tips on how to use them:

                  -
                    -
                  • Use boosters before starting a level to get an advantage. For example, use a hammer to remove a tile of your choice, or use a bomb to clear a 3x3 area.
                  • -
                  • Use power-ups during a level to create more matches or clear obstacles. For example, use a rainbow to match any tile of your choice, or use a rocket to clear a row or column.
                  • -
                  • Save your boosters and power-ups for harder levels or when you are stuck. Don't waste them on easy levels or when you have plenty of moves left.
                  • -
                  -

                  Complete daily tasks and achievements for rewards

                  -

                  Daily tasks and achievements are goals that you can complete by playing the game regularly. They can be found in the menu at the bottom of the screen. By completing them, you can earn various rewards, such as coins, gems, boosters, power-ups, and clothes. Here are some tips on how to complete them:

                  -
                    -
                  • Check the daily tasks and achievements every day and try to complete as many as possible. They will refresh every 24 hours, so don't miss the chance to claim your rewards.
                  • -
                  • Focus on the tasks and achievements that match your play style and goals. For example, if you want to decorate the mansion faster, focus on the tasks and achievements that require you to earn stars or complete levels.
                  • -
                  • Use your boosters and power-ups to help you complete the tasks and achievements that are too hard or time-consuming. For example, if you need to clear a certain number of tiles or obstacles in a level, use a bomb or a rocket to speed up the process.
                  • -
                  -

                  Connect with Facebook to play with friends and save your progress

                  -

                  Family Town: Match-3 Makeover is more fun when you play with friends. You can connect your game account with your Facebook account to enjoy the following benefits:

                  -
                    -
                  • You can invite your Facebook friends to play the game and see their progress on the map.
                  • -
                  • You can send and receive lives, coins, gems, and boosters from your Facebook friends.
                  • -
                  • You can compete with your Facebook friends on the leaderboard and see who has the highest score.
                  • -
                  • You can save your game progress on the cloud and sync it across multiple devices.
                  • -
                  -

                  To connect your game account with your Facebook account, follow these steps:

                  -
                    -
                  1. Tap on the gear icon at the top right corner of the screen to open the settings menu.
                  2. -
                  3. Tap on the "Connect" button next to the Facebook logo.
                  4. -
                  5. Log in with your Facebook credentials and allow the game to access your profile information.
                  6. -
                  7. Enjoy playing with your Facebook friends and saving your progress.
                  8. -
                  -

                  Conclusion

                  -

                  Family Town: Match-3 Makeover is a game that offers a lot of fun and entertainment for anyone who loves match-3 puzzle games and fashion makeover games. You can play it for free on your Android device and enjoy its features, such as:

                  -
                    -
                  • Match-3 puzzles with a twist
                  • -
                  • Fashion makeover with endless possibilities
                  • -
                  • Home and garden decoration with your own style
                  • -
                  • Storyline with interesting characters and events
                  • -
                  -

                  You can also download and install Family Town: Match-3 Makeover APK from Google Play Store or APKCombo, depending on your preference. You can also use some tips and tricks to improve your gameplay, such as using boosters and power-ups wisely, completing daily tasks and achievements for rewards, and connecting with Facebook to play with friends and save your progress.

                  -

                  FAQs

                  -

                  What are the system requirements for Family Town: Match-3 Makeover?

                  -

                  The game requires Android 5.0 or higher and at least 100 MB of free storage space on your device.

                  -

                  How can I contact the developer of Family Town: Match-3 Makeover?

                  -

                  You can contact PlayFlock by sending an email to support@playflock.com or by visiting their website at https://playflock.com/.

                  -

                  How can I get more coins and gems in Family Town: Match-3 Makeover?

                  -

                  You can get more coins and gems by completing levels, earning stars, completing tasks and achievements, watching ads, spinning the wheel, opening chests, or buying them with real money.

                  -

                  How can I change the language of Family Town: Match-3 Makeover?

                  -

                  You can change the language of the game by tapping on the gear icon at the top right corner of the screen, then tapping on the "Language" button, and choosing from the available options.

                  -

                  How can I reset my game progress in Family Town: Match-3 Makeover?

                  -

                  You can reset your game progress by tapping on the gear icon at the top right corner of the screen, then tapping on the "Reset" button, and confirming your choice. However, this will delete all your data and purchases, so be careful before doing this.

                  401be4b1e0
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/classification/readme.md b/spaces/skf15963/summary/fengshen/examples/classification/readme.md deleted file mode 100644 index b90ce5a946acf55a6530b3c8d010a5ec2642f6ae..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/classification/readme.md +++ /dev/null @@ -1,23 +0,0 @@ -## 分类下游任务 - -在当前目录下,我们提供丰富的分类任务的示例,其中我们提供三个一键式运行的示例。 - -- demo_classification_afqmc_roberta.sh 使用DDP微调roberta -- demo_classification_afqmc_roberta_deepspeed.sh 结合deepspeed微调roberta,获得更快的运算速度 -- demo_classification_afqmc_erlangshen_offload.sh 仅需7G显存即可微调我们效果最好的二郎神系列模型 - -上述示例均采用AFQMC的数据集,关于数据集的介绍可以在[这里](https://www.cluebenchmarks.com/introduce.html)找到。 -同时我们处理过的数据文件已经放在Huggingface上,点击[这里](https://huggingface.co/datasets/IDEA-CCNL/AFQMC)直达源文件。 -仅需要按我们的格式稍微处理一下数据集,即可适配下游不同的分类任务。 -在脚本示例中,仅需要修改如下参数即可适配本地文件 -``` - --dataset_name IDEA-CCNL/AFQMC \ - --------> 修改为 - - --data_dir $DATA_DIR \ # 数据目录 - --train_data train.json \ # 数据文件 - --valid_data dev.json \ - --test_data test.json \ - -``` \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/models/zen1/__init__.py b/spaces/skf15963/summary/fengshen/models/zen1/__init__.py deleted file mode 100644 index 2dec07c8fb965677ba8c8d3b0a13809d0199d301..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/zen1/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .ngram_utils import ZenNgramDict, NGRAM_DICT_NAME -from .modeling import ZenConfig, ZenModel, ZenForPreTraining, ZenForTokenClassification, ZenForSequenceClassification -from .tokenization import BertTokenizer, BasicTokenizer, WordpieceTokenizer -version = "0.1.0" -__all__ = ['ZenNgramDict', 'NGRAM_DICT_NAME', "ZenConfig", "ZenModel", "ZenForPreTraining", "ZenForTokenClassification", - "ZenForSequenceClassification", "BertTokenizer", "BasicTokenizer", "WordpieceTokenizer"] diff --git a/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/ops/grid_sample_gradfix.py b/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/ops/grid_sample_gradfix.py deleted file mode 100644 index ca6b3413ea72a734703c34382c023b84523601fd..0000000000000000000000000000000000000000 --- a/spaces/skimai/DragGAN_Streamlit/stylegan2/torch_utils/ops/grid_sample_gradfix.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.grid_sample` that -supports arbitrarily high order gradients between the input and output. -Only works on 2D images and assumes -`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`.""" - -import warnings -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. 
- -#---------------------------------------------------------------------------- - -def grid_sample(input, grid): - if _should_use_custom_op(): - return _GridSample2dForward.apply(input, grid) - return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(): - if not enabled: - return False - if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']): - return True - warnings.warn(f'grid_sample_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.grid_sample().') - return False - -#---------------------------------------------------------------------------- - -class _GridSample2dForward(torch.autograd.Function): - @staticmethod - def forward(ctx, input, grid): - assert input.ndim == 4 - assert grid.ndim == 4 - output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - ctx.save_for_backward(input, grid) - return output - - @staticmethod - def backward(ctx, grad_output): - input, grid = ctx.saved_tensors - grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid) - return grad_input, grad_grid - -#---------------------------------------------------------------------------- - -class _GridSample2dBackward(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input, grid): - op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward') - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False) - ctx.save_for_backward(grid) - return grad_input, grad_grid - - @staticmethod - def backward(ctx, grad2_grad_input, grad2_grad_grid): - _ = grad2_grad_grid # unused - grid, = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - grad2_grid = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid) - - assert not ctx.needs_input_grad[2] - return grad2_grad_output, grad2_input, grad2_grid - -#---------------------------------------------------------------------------- diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/scripts/download_pretrained_models.py b/spaces/sklkd93/CodeFormer/CodeFormer/scripts/download_pretrained_models.py deleted file mode 100644 index daa6e8ca14ea91c89a318e85d9f182eb7d1bf025..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/scripts/download_pretrained_models.py +++ /dev/null @@ -1,40 +0,0 @@ -import argparse -import os -from os import path as osp - -from basicsr.utils.download_util import load_file_from_url - - -def download_pretrained_models(method, file_urls): - save_path_root = f'./weights/{method}' - os.makedirs(save_path_root, exist_ok=True) - - for file_name, file_url in file_urls.items(): - save_path = load_file_from_url(url=file_url, model_dir=save_path_root, progress=True, file_name=file_name) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - - parser.add_argument( - 'method', - type=str, - help=("Options: 'CodeFormer' 'facelib'. 
Set to 'all' to download all the models.")) - args = parser.parse_args() - - file_urls = { - 'CodeFormer': { - 'codeformer.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth' - }, - 'facelib': { - # 'yolov5l-face.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5l-face.pth', - 'detection_Resnet50_Final.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_Resnet50_Final.pth', - 'parsing_parsenet.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth' - } - } - - if args.method == 'all': - for method in file_urls.keys(): - download_pretrained_models(method, file_urls[method]) - else: - download_pretrained_models(args.method, file_urls[args.method]) \ No newline at end of file diff --git a/spaces/smjain/unixshell_command_gen/README.md b/spaces/smjain/unixshell_command_gen/README.md deleted file mode 100644 index 3091855e911d82773d66f76ee1a208e39d1075ae..0000000000000000000000000000000000000000 --- a/spaces/smjain/unixshell_command_gen/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Unixshell Command Gen -emoji: 🦀 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sneedium/dvatch_captcha_sneedium_old/app.py b/spaces/sneedium/dvatch_captcha_sneedium_old/app.py deleted file mode 100644 index d24be292216a2bcba4be146be1a5ed436b2bcf15..0000000000000000000000000000000000000000 --- a/spaces/sneedium/dvatch_captcha_sneedium_old/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import os -os.system('pip install --upgrade gdown') -import gdown -gdown.download(id='1z0O-bBy1z6WVV1QBBbFz8biXGl7ni--r', output='workdir.zip') -os.system('unzip workdir.zip') - - -import glob -import gradio as gr -from demo import get_model, preprocess, postprocess, load -from utils import Config, Logger, CharsetMapper - -config = Config('configs/train_abinet.yaml') -config.model_vision_checkpoint = None -model = get_model(config) -model = load(model, 'workdir/train-abinet/best-train-abinet.pth') -charset = CharsetMapper(filename=config.dataset_charset_path, max_length=config.dataset_max_length + 1) - -def process_image(image): - img = image.convert('RGB') - img = preprocess(img, config.dataset_image_width, config.dataset_image_height) - res = model(img) - return postprocess(res, charset, 'alignment')[0][0] - -title = "Made with ABINet" -description = "I hate captchas" - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Textbox(), - title=title, - description=description, - examples=glob.glob('figs_captchas/*.png')) - -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/sohomghosh/FinRead/README.md b/spaces/sohomghosh/FinRead/README.md deleted file mode 100644 index b0724bcf9e315c5306a0e2c7146e5b80f33bdbe5..0000000000000000000000000000000000000000 --- a/spaces/sohomghosh/FinRead/README.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: FinRead- Financial Readibility Assessment Tool -emoji: ​🤓​👨‍🏫​📚​ -colorFrom: purple -colorTo: red -sdk: gradio -app_file: app.py -pinned: true -license: mit ---- - -``bibtex -@proceedings{ghosh-2021-finread, - title = "FinRead: A Transfer Learning Based Tool to Assess Readability of Definitions of Financial Terms", - author = "Sohom Ghosh, Shovon Sengupta, Sudip Kumar Naskar, Sunny Kumar Singh", - booktitle = "Proceedings of the 18th 
International Conference on Natural Language Processing (ICON) : - System Demonstrations", - month = "dec", - year = "2021", - publisher = "NLP Association of India (NLPAI)", - url = "forthcoming", - intype = {to appear in}, - pre-print = "https://easychair.org/publications/preprint/1wvS" -} -``` \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py deleted file mode 100644 index 2a287a4e97c66acbd36897b25f2ece5494005f03..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py +++ /dev/null @@ -1,27 +0,0 @@ -import os -import time -import torch -import sys -import subprocess - -argslist = list(sys.argv)[1:] -log_dir = argslist[-1] -num_gpus = torch.cuda.device_count() -argslist.append('--n_gpus={}'.format(num_gpus)) -workers = [] -job_id = time.strftime("%Y_%m_%d-%H%M%S") -argslist.append("--group_name=group_{}".format(job_id)) - -print("GPU log directory is {}".format(log_dir)) -os.makedirs(log_dir, exist_ok=True) -for i in range(num_gpus): - argslist.append('--rank={}'.format(i)) - stdout = None if i == 0 else open("{}/{}_GPU_{}.log".format(log_dir, job_id, i), - "w") - print(argslist) - p = subprocess.Popen([str(sys.executable)]+argslist, stdout=stdout) - workers.append(p) - argslist = argslist[:-1] - -for p in workers: - p.wait() diff --git a/spaces/stomexserde/gpt4-ui/Examples/Crack Code Activation Obd Facile !EXCLUSIVE!.md b/spaces/stomexserde/gpt4-ui/Examples/Crack Code Activation Obd Facile !EXCLUSIVE!.md deleted file mode 100644 index fb8f8699c1e24678b2f32f822b9ef71ac9309d67..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Crack Code Activation Obd Facile !EXCLUSIVE!.md +++ /dev/null @@ -1,45 +0,0 @@ -
                  -

                  Crack Code Activation OBD Facile: How to Unlock the Full Potential of Your Car Diagnostic Tool

                  -

                  If you own a car, you probably know how important it is to keep it in good condition and prevent any problems that may affect its performance or safety. However, taking your car to a mechanic every time there is a warning light or a strange noise can be costly and time-consuming. That's why many car owners choose to use their own diagnostic tools that allow them to scan their car's system and identify any issues themselves.

                  -

One of these tools is OBD Facile, software that allows you to diagnose your car using an ELM327 interface. This interface is a device that connects your car's OBD port (OBD stands for On-Board Diagnostics) to your PC, Mac, Android, or iOS device. By using this software, you can display engine and gearbox fault codes, specific manufacturer error codes, and real-time vehicle sensor readings. This way, you can find out what's wrong with your car and either fix it yourself or take it to a professional with a clearer idea of the problem.
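To give a concrete sense of what such a tool does behind the scenes, the sketch below shows how a generic ELM327 adapter can be queried over a serial link with Python. This is only an illustration and is not part of OBD Facile: the pyserial library, the port name, and the baud rate are assumptions you would adjust for your own adapter, while the commands shown (ATZ, ATE0, mode 03 and mode 01 requests) are standard ELM327/OBD-II examples.

```python
# Minimal sketch of querying a generic ELM327 adapter (assumes the pyserial
# package is installed and the adapter is visible as a serial port).
import serial

def elm_query(port, command):
    """Send one AT/OBD command and return the raw reply up to the '>' prompt."""
    port.write((command + "\r").encode("ascii"))
    return port.read_until(b">").decode("ascii", errors="ignore").strip()

# "COM3" and 38400 baud are assumptions; use e.g. "/dev/ttyUSB0" on Linux.
with serial.Serial("COM3", 38400, timeout=2) as port:
    print(elm_query(port, "ATZ"))    # reset the adapter
    print(elm_query(port, "ATE0"))   # turn off command echo
    print(elm_query(port, "03"))     # mode 03: read stored fault codes (DTCs)
    reply = elm_query(port, "010C")  # mode 01, PID 0C: engine RPM
    # A reply such as "41 0C 1A F8" decodes to ((0x1A * 256) + 0xF8) / 4 RPM.
    print(reply)
```

A tool like OBD Facile automates this kind of exchange and decodes the raw responses into readable fault descriptions and live sensor values.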

                  -

                  Crack Code Activation Obd Facile


                  Downloadhttps://urlgoal.com/2uIc0O



                  -

However, not all features and functions of OBD Facile are available for free. The software has three versions: Basic, Plus, and Ultimate. The Basic version is free, but it only allows you to read and erase generic fault codes and display some vehicle sensors. The Plus version costs 39.99 euros, and it adds the ability to read and erase specific manufacturer fault codes, display more vehicle sensors, and export data to Excel. The Ultimate version costs 59.99 euros, and it unlocks the full potential of the software, allowing you to access advanced parameters, graphs, data recording, and printing. The Ultimate version also supports more vehicles and protocols than the other versions.

                  -

So, what if you want to use the Ultimate version of OBD Facile without paying for it? Is there a way to get it for free? The answer is yes: by using the crack code activation for OBD Facile.

                  -

                  What is the Crack Code Activation for OBD Facile?

                  -

The crack code activation for OBD Facile is a serial number that you can use to activate the Ultimate version of the software without paying for it. It can be obtained from various sources online, such as forums, websites, or torrents. However, you should be careful when downloading the crack code activation file, as some of these files may contain malware or viruses that can harm your device or compromise your data.

                  -

                  The crack code activation for OBD Facile can be entered in the software settings after installing it on your device. Once you enter the crack code activation, you will be able to use all the features and functions of the Ultimate version of OBD Facile without any limitations or restrictions.

                  -

                  How to Use the Crack Code Activation for OBD Facile?

                  -

                  To use the crack code activation for OBD Facile, you need to have an ELM327 interface and a compatible device (PC, Mac, Android, or iOS). You also need to have a car that supports the OBD protocol and has an OBD port. The OBD port is usually located under the dashboard or near the steering wheel. You can check your car's manual or search online to find out where your car's OBD port is.

                  -

                  To use the crack code activation for OBD Facile, you need to follow these steps:

                  -
                    -
                  1. Download and install the software on your device from the official website or from another source.
                  2. -
                  3. Download the crack code activation file from one of the sources mentioned above and scan it with an antivirus program before opening it.
                  4. -
                  5. Connect your device to your car's OBD port using the ELM327 interface. You may need a Bluetooth or Wi-Fi connection depending on your interface type.
                  6. -
                  7. Launch the software and go to the settings menu. You will see a field where you can enter the serial number.
                  8. -
                  9. Enter the crack code activation that you downloaded and click on "Activate". You should see a message confirming that your software has been activated successfully.
                  10. -
                  11. Enjoy using the Ultimate version of OBD Facile with all its features and functions.
                  12. -
                  -

                  What are the Risks and Precautions of Using the Crack Code Activation for OBD Facile?

                  -

                  Using the crack code activation for OBD Facile may expose you to some risks, such as malware, viruses, legal issues, or software errors. Malware and viruses can infect your device or steal your data if you download the crack code activation file from an untrusted source or open it without scanning it first. Legal issues can arise if you use the crack code activation for OBD Facile without permission from the software developer or owner. Software errors can occur if you use an outdated or incompatible version of the software or interface.

                  -

                  -

To avoid these risks, you should always scan the crack code activation file before opening it and use a reliable antivirus program on your device. You should also back up your data before using the crack code activation and follow the instructions carefully when using the software. You should also be aware of the possible consequences of using pirated software and respect the intellectual property rights of others.

                  -

                  Conclusion

                  -

Crack code activation for OBD Facile is a way to access the Ultimate version of the software without paying for it. It can help you diagnose your car and fix problems yourself, with more features and functions than the other versions. It can be obtained from various sources online, but you should be careful of the potential risks and take precautions before using it.

                  -

If you want to use OBD Facile to diagnose your car and save time and money, you may be tempted to use the crack code activation for OBD Facile to get the Ultimate version for free. However, you should also consider the risks and precautions of using pirated software and respect the rights of the software developer. Alternatively, you can purchase the Ultimate version of OBD Facile from the official website or from a trusted seller and enjoy the benefits of using legal and safe software.

                  -

We hope this article has helped you understand what crack code activation for OBD Facile is, how to use it, and what the risks and precautions of using it are. If you have any questions or comments, feel free to leave them below. Thank you for reading!

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about crack code activation for OBD Facile:

                  -
                    -
                  1. What is OBD Facile?
                  2. -

OBD Facile is software that allows you to diagnose your car using an ELM327 interface. It can display engine and gearbox fault codes, specific manufacturer error codes, and real-time vehicle sensor readings.

                    -
                  3. What is the difference between the Basic, Plus, and Ultimate versions of OBD Facile?
                  4. -

                    The Basic version is free, but it only allows you to read and erase generic fault codes and display some vehicle sensors. The Plus version costs 39.99 euros, and it adds the ability to read and erase specific manufacturer fault codes, display more vehicle sensors, and export data to Excel. The Ultimate version costs 59.99 euros, and it unlocks the full potential of the software, allowing you to access advanced parameters, graphs, data recording, and printing. The Ultimate version also supports more vehicles and protocols than the other versions.

                    -
                  5. What is the crack code activation for OBD Facile?
                  6. -

                    The crack code activation for OBD Facile is a serial number that you can use to activate the Ultimate version of the software without paying for it. The crack code activation for OBD Facile can be obtained from various sources online, such as forums, websites, or torrents.

                    -
                  7. How to use the crack code activation for OBD Facile?
                  8. -

                    To use the crack code activation for OBD Facile, you need to have an ELM327 interface and a compatible device (PC, Mac, Android, or iOS). You also need to connect your device to your car's OBD port using the ELM327 interface. Then, you need to launch the software and enter the crack code in the settings menu.

                    -
                  9. What are the risks and precautions of using the crack code activation for OBD Facile?
                  10. -

Using the crack code activation for OBD Facile may expose you to some risks, such as malware, viruses, legal issues, or software errors. To avoid these risks, you should always scan the crack code activation file before opening it and use a reliable antivirus program on your device. You should also back up your data before using the crack code activation and follow the instructions carefully when using the software. You should also be aware of the possible consequences of using pirated software and respect the intellectual property rights of others.

                    -

                  b2dd77e56b
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/studiobrn/SplitTrack/audiocraft/utils/notebook.py b/spaces/studiobrn/SplitTrack/audiocraft/utils/notebook.py deleted file mode 100644 index 019b9d19e5bef976bedddf428fd25da42a8a9726..0000000000000000000000000000000000000000 --- a/spaces/studiobrn/SplitTrack/audiocraft/utils/notebook.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -try: - import IPython.display as ipd # type: ignore -except ImportError: - # Note in a notebook... - pass - - -import torch - - -def display_audio(samples: torch.Tensor, sample_rate: int): - """Renders an audio player for the given audio samples. - - Args: - samples (torch.Tensor): a Tensor of decoded audio samples - with shapes [B, C, T] or [C, T] - sample_rate (int): sample rate audio should be displayed with. - """ - assert samples.dim() == 2 or samples.dim() == 3 - - samples = samples.detach().cpu() - if samples.dim() == 2: - samples = samples[None, ...] - - for audio in samples: - ipd.display(ipd.Audio(audio, rate=sample_rate)) diff --git a/spaces/sub314xxl/MetaGPT/metagpt/tools/search_engine_googleapi.py b/spaces/sub314xxl/MetaGPT/metagpt/tools/search_engine_googleapi.py deleted file mode 100644 index b9faf2ced14c07a3b0b9e635a56561c1ec9479fa..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/tools/search_engine_googleapi.py +++ /dev/null @@ -1,140 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -from __future__ import annotations - -import asyncio -import json -from concurrent import futures -from typing import Optional -from urllib.parse import urlparse - -import httplib2 -from pydantic import BaseModel, validator - -from metagpt.config import CONFIG -from metagpt.logs import logger - -try: - from googleapiclient.discovery import build - from googleapiclient.errors import HttpError -except ImportError: - raise ImportError( - "To use this module, you should have the `google-api-python-client` Python package installed. " - "You can install it by running the command: `pip install -e.[search-google]`" - ) - - -class GoogleAPIWrapper(BaseModel): - google_api_key: Optional[str] = None - google_cse_id: Optional[str] = None - loop: Optional[asyncio.AbstractEventLoop] = None - executor: Optional[futures.Executor] = None - - class Config: - arbitrary_types_allowed = True - - @validator("google_api_key", always=True) - @classmethod - def check_google_api_key(cls, val: str): - val = val or CONFIG.google_api_key - if not val: - raise ValueError( - "To use, make sure you provide the google_api_key when constructing an object. Alternatively, " - "ensure that the environment variable GOOGLE_API_KEY is set with your API key. You can obtain " - "an API key from https://console.cloud.google.com/apis/credentials." - ) - return val - - @validator("google_cse_id", always=True) - @classmethod - def check_google_cse_id(cls, val: str): - val = val or CONFIG.google_cse_id - if not val: - raise ValueError( - "To use, make sure you provide the google_cse_id when constructing an object. Alternatively, " - "ensure that the environment variable GOOGLE_CSE_ID is set with your API key. You can obtain " - "an API key from https://programmablesearchengine.google.com/controlpanel/create." 
- ) - return val - - @property - def google_api_client(self): - build_kwargs = {"developerKey": self.google_api_key} - if CONFIG.global_proxy: - parse_result = urlparse(CONFIG.global_proxy) - proxy_type = parse_result.scheme - if proxy_type == "https": - proxy_type = "http" - build_kwargs["http"] = httplib2.Http( - proxy_info=httplib2.ProxyInfo( - getattr(httplib2.socks, f"PROXY_TYPE_{proxy_type.upper()}"), - parse_result.hostname, - parse_result.port, - ), - ) - service = build("customsearch", "v1", **build_kwargs) - return service.cse() - - async def run( - self, - query: str, - max_results: int = 8, - as_string: bool = True, - focus: list[str] | None = None, - ) -> str | list[dict]: - """Return the results of a Google search using the official Google API. - - Args: - query: The search query. - max_results: The number of results to return. - as_string: A boolean flag to determine the return type of the results. If True, the function will - return a formatted string with the search results. If False, it will return a list of dictionaries - containing detailed information about each search result. - focus: Specific information to be focused on from each search result. - - Returns: - The results of the search. - """ - loop = self.loop or asyncio.get_event_loop() - future = loop.run_in_executor( - self.executor, self.google_api_client.list(q=query, num=max_results, cx=self.google_cse_id).execute - ) - try: - result = await future - # Extract the search result items from the response - search_results = result.get("items", []) - - except HttpError as e: - # Handle errors in the API call - logger.exception(f"fail to search {query} for {e}") - search_results = [] - - focus = focus or ["snippet", "link", "title"] - details = [{i: j for i, j in item_dict.items() if i in focus} for item_dict in search_results] - # Return the list of search result URLs - if as_string: - return safe_google_results(details) - - return details - - -def safe_google_results(results: str | list) -> str: - """Return the results of a google search in a safe format. - - Args: - results: The search results. - - Returns: - The results of the search. - """ - if isinstance(results, list): - safe_message = json.dumps([result for result in results]) - else: - safe_message = results.encode("utf-8", "ignore").decode("utf-8") - return safe_message - - -if __name__ == "__main__": - import fire - - fire.Fire(GoogleAPIWrapper().run) diff --git a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/data/audio_dataset.py b/spaces/sub314xxl/MusicGen-Continuation/audiocraft/data/audio_dataset.py deleted file mode 100644 index cf21422ea0059cb2d6553f93e608b8f9fa0d3a50..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/data/audio_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import copy -from concurrent.futures import ThreadPoolExecutor, Future -from dataclasses import dataclass, fields -from contextlib import ExitStack -import gzip -import json -import logging -import os -from pathlib import Path -import random -import sys -import typing as tp - -import torch -import torch.nn.functional as F - -from .audio import audio_read, audio_info -from .audio_utils import convert_audio -from .zip import PathInZip - -try: - import dora -except ImportError: - dora = None # type: ignore - - -@dataclass(order=True) -class BaseInfo: - - @classmethod - def _dict2fields(cls, dictionary: dict): - return { - field.name: dictionary[field.name] - for field in fields(cls) if field.name in dictionary - } - - @classmethod - def from_dict(cls, dictionary: dict): - _dictionary = cls._dict2fields(dictionary) - return cls(**_dictionary) - - def to_dict(self): - return { - field.name: self.__getattribute__(field.name) - for field in fields(self) - } - - -@dataclass(order=True) -class AudioMeta(BaseInfo): - path: str - duration: float - sample_rate: int - amplitude: tp.Optional[float] = None - weight: tp.Optional[float] = None - # info_path is used to load additional information about the audio file that is stored in zip files. - info_path: tp.Optional[PathInZip] = None - - @classmethod - def from_dict(cls, dictionary: dict): - base = cls._dict2fields(dictionary) - if 'info_path' in base and base['info_path'] is not None: - base['info_path'] = PathInZip(base['info_path']) - return cls(**base) - - def to_dict(self): - d = super().to_dict() - if d['info_path'] is not None: - d['info_path'] = str(d['info_path']) - return d - - -@dataclass(order=True) -class SegmentInfo(BaseInfo): - meta: AudioMeta - seek_time: float - n_frames: int # actual number of frames without padding - total_frames: int # total number of frames, padding included - sample_rate: int # actual sample rate - - -DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a'] - -logger = logging.getLogger(__name__) - - -def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta: - """AudioMeta from a path to an audio file. - - Args: - file_path (str): Resolved path of valid audio file. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - Returns: - AudioMeta: Audio file path and its metadata. - """ - info = audio_info(file_path) - amplitude: tp.Optional[float] = None - if not minimal: - wav, sr = audio_read(file_path) - amplitude = wav.abs().max().item() - return AudioMeta(file_path, info.duration, info.sample_rate, amplitude) - - -def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta: - """If Dora is available as a dependency, try to resolve potential relative paths - in list of AudioMeta. This method is expected to be used when loading meta from file. - - Args: - m (AudioMeta): Audio meta to resolve. - fast (bool): If True, uses a really fast check for determining if a file is already absolute or not. - Only valid on Linux/Mac. - Returns: - AudioMeta: Audio meta with resolved path. 
- """ - def is_abs(m): - if fast: - return str(m)[0] == '/' - else: - os.path.isabs(str(m)) - - if not dora: - return m - - if not is_abs(m.path): - m.path = dora.git_save.to_absolute_path(m.path) - if m.info_path is not None and not is_abs(m.info_path.zip_path): - m.info_path.zip_path = dora.git_save.to_absolute_path(m.path) - return m - - -def find_audio_files(path: tp.Union[Path, str], - exts: tp.List[str] = DEFAULT_EXTS, - resolve: bool = True, - minimal: bool = True, - progress: bool = False, - workers: int = 0) -> tp.List[AudioMeta]: - """Build a list of AudioMeta from a given path, - collecting relevant audio files and fetching meta info. - - Args: - path (str or Path): Path to folder containing audio files. - exts (list of str): List of file extensions to consider for audio files. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - progress (bool): Whether to log progress on audio files collection. - workers (int): number of parallel workers, if 0, use only the current thread. - Returns: - List[AudioMeta]: List of audio file path and its metadata. - """ - audio_files = [] - futures: tp.List[Future] = [] - pool: tp.Optional[ThreadPoolExecutor] = None - with ExitStack() as stack: - if workers > 0: - pool = ThreadPoolExecutor(workers) - stack.enter_context(pool) - - if progress: - print("Finding audio files...") - for root, folders, files in os.walk(path, followlinks=True): - for file in files: - full_path = Path(root) / file - if full_path.suffix.lower() in exts: - audio_files.append(full_path) - if pool is not None: - futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal)) - if progress: - print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr) - - if progress: - print("Getting audio metadata...") - meta: tp.List[AudioMeta] = [] - for idx, file_path in enumerate(audio_files): - try: - if pool is None: - m = _get_audio_meta(str(file_path), minimal) - else: - m = futures[idx].result() - if resolve: - m = _resolve_audio_meta(m) - except Exception as err: - print("Error with", str(file_path), err, file=sys.stderr) - continue - meta.append(m) - if progress: - print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr) - meta.sort() - return meta - - -def load_audio_meta(path: tp.Union[str, Path], - resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]: - """Load list of AudioMeta from an optionally compressed json file. - - Args: - path (str or Path): Path to JSON file. - resolve (bool): Whether to resolve the path from AudioMeta (default=True). - fast (bool): activates some tricks to make things faster. - Returns: - List[AudioMeta]: List of audio file path and its total duration. - """ - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'rb') as fp: # type: ignore - lines = fp.readlines() - meta = [] - for line in lines: - d = json.loads(line) - m = AudioMeta.from_dict(d) - if resolve: - m = _resolve_audio_meta(m, fast=fast) - meta.append(m) - return meta - - -def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]): - """Save the audio metadata to the file pointer as json. - - Args: - path (str or Path): Path to JSON file. - metadata (list of BaseAudioMeta): List of audio meta to save. 
- """ - Path(path).parent.mkdir(exist_ok=True, parents=True) - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'wb') as fp: # type: ignore - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - json_bytes = json_str.encode('utf-8') - fp.write(json_bytes) - - -class AudioDataset: - """Base audio dataset. - - The dataset takes a list of AudioMeta and create a dataset composed of segments of audio - and potentially additional information, by creating random segments from the list of audio - files referenced in the metadata and applying minimal data pre-processing such as resampling, - mixing of channels, padding, etc. - - If no segment_duration value is provided, the AudioDataset will return the full wav for each - audio file. Otherwise, it will randomly sample audio files and create a segment of the specified - duration, applying padding if required. - - By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True - allows to return a tuple containing the torch Tensor and additional metadata on the segment and the - original audio meta. - - Args: - meta (tp.List[AudioMeta]): List of audio files metadata. - segment_duration (float): Optional segment duration of audio to load. - If not specified, the dataset will load the full audio segment from the file. - shuffle (bool): Set to `True` to have the data reshuffled at every epoch. - sample_rate (int): Target sample rate of the loaded audio samples. - channels (int): Target number of channels of the loaded audio samples. - sample_on_duration (bool): Set to `True` to sample segments with probability - dependent on audio file duration. This is only used if `segment_duration` is provided. - sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of - `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product - of the file duration and file weight. This is only used if `segment_duration` is provided. - min_segment_ratio (float): Minimum segment ratio to use when the audio file - is shorter than the desired segment. - max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset. - return_info (bool): Whether to return the wav only or return wav along with segment info and metadata. - min_audio_duration (tp.Optional[float], optional): Minimum audio file duration, in seconds, if provided - audio shorter than this will be filtered out. - max_audio_duration (tp.Optional[float], optional): Maximal audio file duration in seconds, if provided - audio longer than this will be filtered out. - """ - def __init__(self, - meta: tp.List[AudioMeta], - segment_duration: tp.Optional[float] = None, - shuffle: bool = True, - num_samples: int = 10_000, - sample_rate: int = 48_000, - channels: int = 2, - pad: bool = True, - sample_on_duration: bool = True, - sample_on_weight: bool = True, - min_segment_ratio: float = 0.5, - max_read_retry: int = 10, - return_info: bool = False, - min_audio_duration: tp.Optional[float] = None, - max_audio_duration: tp.Optional[float] = None - ): - assert len(meta) > 0, 'No audio meta provided to AudioDataset. Please check loading of audio meta.' 
- assert segment_duration is None or segment_duration > 0 - assert segment_duration is None or min_segment_ratio >= 0 - logging.debug(f'sample_on_duration: {sample_on_duration}') - logging.debug(f'sample_on_weight: {sample_on_weight}') - logging.debug(f'pad: {pad}') - logging.debug(f'min_segment_ratio: {min_segment_ratio}') - - self.segment_duration = segment_duration - self.min_segment_ratio = min_segment_ratio - self.max_audio_duration = max_audio_duration - self.min_audio_duration = min_audio_duration - if self.min_audio_duration is not None and self.max_audio_duration is not None: - assert self.min_audio_duration <= self.max_audio_duration - self.meta: tp.List[AudioMeta] = self._filter_duration(meta) - assert len(self.meta) # Fail fast if all data has been filtered. - self.total_duration = sum(d.duration for d in self.meta) - - if segment_duration is None: - num_samples = len(self.meta) - self.num_samples = num_samples - self.shuffle = shuffle - self.sample_rate = sample_rate - self.channels = channels - self.pad = pad - self.sample_on_weight = sample_on_weight - self.sample_on_duration = sample_on_duration - self.sampling_probabilities = self._get_sampling_probabilities() - self.max_read_retry = max_read_retry - self.return_info = return_info - - def __len__(self): - return self.num_samples - - def _get_sampling_probabilities(self, normalized: bool = True): - """Return the sampling probabilities for each file inside `self.meta`. - """ - scores: tp.List[float] = [] - for file_meta in self.meta: - score = 1. - if self.sample_on_weight and file_meta.weight is not None: - score *= file_meta.weight - if self.sample_on_duration: - score *= file_meta.duration - scores.append(score) - probabilities = torch.tensor(scores) - if normalized: - probabilities /= probabilities.sum() - return probabilities - - def sample_file(self, rng: torch.Generator) -> AudioMeta: - """Sample a given file from `self.meta`. Can be overriden in subclasses. - This is only called if `segment_duration` is not None. - - You must use the provided random number generator `rng` for reproducibility. 
- """ - if not self.sample_on_weight and not self.sample_on_duration: - file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item()) - else: - file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item()) - - return self.meta[file_index] - - def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]: - if self.segment_duration is None: - file_meta = self.meta[index] - out, sr = audio_read(file_meta.path) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames, - sample_rate=self.sample_rate) - else: - rng = torch.Generator() - if self.shuffle: - # We use index, plus extra randomness - rng.manual_seed(index + self.num_samples * random.randint(0, 2**24)) - else: - # We only use index - rng.manual_seed(index) - - for retry in range(self.max_read_retry): - file_meta = self.sample_file(rng) - # We add some variance in the file position even if audio file is smaller than segment - # without ending up with empty segments - max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio) - seek_time = torch.rand(1, generator=rng).item() * max_seek - try: - out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - target_frames = int(self.segment_duration * self.sample_rate) - if self.pad: - out = F.pad(out, (0, target_frames - n_frames)) - segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames, - sample_rate=self.sample_rate) - except Exception as exc: - logger.warning("Error opening file %s: %r", file_meta.path, exc) - if retry == self.max_read_retry - 1: - raise - else: - break - - if self.return_info: - # Returns the wav and additional information on the wave segment - return out, segment_info - else: - return out - - def collater(self, samples): - """The collater function has to be provided to the dataloader - if AudioDataset has return_info=True in order to properly collate - the samples of a batch. - """ - if self.segment_duration is None and len(samples) > 1: - assert self.pad, "Must allow padding when batching examples of different durations." - - # In this case the audio reaching the collater is of variable length as segment_duration=None. - to_pad = self.segment_duration is None and self.pad - if to_pad: - max_len = max([wav.shape[-1] for wav, _ in samples]) - - def _pad_wav(wav): - return F.pad(wav, (0, max_len - wav.shape[-1])) - - if self.return_info: - if len(samples) > 0: - assert len(samples[0]) == 2 - assert isinstance(samples[0][0], torch.Tensor) - assert isinstance(samples[0][1], SegmentInfo) - - wavs = [wav for wav, _ in samples] - segment_infos = [copy.deepcopy(info) for _, info in samples] - - if to_pad: - # Each wav could be of a different duration as they are not segmented. - for i in range(len(samples)): - # Determines the total legth of the signal with padding, so we update here as we pad. 
- segment_infos[i].total_frames = max_len - wavs[i] = _pad_wav(wavs[i]) - - wav = torch.stack(wavs) - return wav, segment_infos - else: - assert isinstance(samples[0], torch.Tensor) - if to_pad: - samples = [_pad_wav(s) for s in samples] - return torch.stack(samples) - - def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]: - """Filters out audio files with short durations. - Removes from meta files that have durations that will not allow to samples examples from them. - """ - orig_len = len(meta) - - # Filter data that is too short. - if self.min_audio_duration is not None: - meta = [m for m in meta if m.duration >= self.min_audio_duration] - - # Filter data that is too long. - if self.max_audio_duration is not None: - meta = [m for m in meta if m.duration <= self.max_audio_duration] - - filtered_len = len(meta) - removed_percentage = 100*(1-float(filtered_len)/orig_len) - msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage - if removed_percentage < 10: - logging.debug(msg) - else: - logging.warning(msg) - return meta - - @classmethod - def from_meta(cls, root: tp.Union[str, Path], **kwargs): - """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file. - - Args: - root (str or Path): Path to root folder containing audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_dir(): - if (root / 'data.jsonl').exists(): - root = root / 'data.jsonl' - elif (root / 'data.jsonl.gz').exists(): - root = root / 'data.jsonl.gz' - else: - raise ValueError("Don't know where to read metadata from in the dir. " - "Expecting either a data.jsonl or data.jsonl.gz file but none found.") - meta = load_audio_meta(root) - return cls(meta, **kwargs) - - @classmethod - def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True, - exts: tp.List[str] = DEFAULT_EXTS, **kwargs): - """Instantiate AudioDataset from a path containing (possibly nested) audio files. - - Args: - root (str or Path): Path to root folder containing audio files. - minimal_meta (bool): Whether to only load minimal metadata or not. - exts (list of str): Extensions for audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_file(): - meta = load_audio_meta(root, resolve=True) - else: - meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True) - return cls(meta, **kwargs) - - -def main(): - logging.basicConfig(stream=sys.stderr, level=logging.INFO) - parser = argparse.ArgumentParser( - prog='audio_dataset', - description='Generate .jsonl files by scanning a folder.') - parser.add_argument('root', help='Root folder with all the audio files') - parser.add_argument('output_meta_file', - help='Output file to store the metadata, ') - parser.add_argument('--complete', - action='store_false', dest='minimal', default=True, - help='Retrieve all metadata, even the one that are expansive ' - 'to compute (e.g. 
normalization).') - parser.add_argument('--resolve', - action='store_true', default=False, - help='Resolve the paths to be absolute and with no symlinks.') - parser.add_argument('--workers', - default=10, type=int, - help='Number of workers.') - args = parser.parse_args() - meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True, - resolve=args.resolve, minimal=args.minimal, workers=args.workers) - save_audio_meta(args.output_meta_file, meta) - - -if __name__ == '__main__': - main() diff --git a/spaces/subhc/Guess-What-Moves/utils/visualisation.py b/spaces/subhc/Guess-What-Moves/utils/visualisation.py deleted file mode 100644 index bf46f2e92d6a88b1a8f1e11809a9bcce95fe29fe..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/utils/visualisation.py +++ /dev/null @@ -1,38 +0,0 @@ -import colorsys - -import torch -import numpy as np -from cvbase.optflow.visualize import flow2rgb - - -def flow2rgb_torch(x): - return torch.from_numpy(flow2rgb(x.permute(1, 2, 0).numpy())).permute(2, 0, 1) - - -def create_label_colormap(): - """Creates a label colormap used in CITYSCAPES segmentation benchmark. - Returns: - A colormap for visualizing segmentation results. - """ - colormap = np.zeros((256, 3), dtype=np.int64) - colormap[0] = [0, 0, 0] - colormap[1] = [166, 206, 227] - colormap[2] = [31, 120, 180] - colormap[3] = [178, 223, 138] - colormap[4] = [51, 160, 44] - colormap[5] = [251, 154, 153] - colormap[6] = [227, 26, 28] - colormap[7] = [253, 191, 111] - colormap[8] = [255, 127, 0] - colormap[9] = [202, 178, 214] - colormap[10] = [106, 61, 154] - colormap[11] = [255, 255, 153] - colormap[12] = [177, 89, 40] - colormap[13] = [0, 0, 142] - colormap[14] = [0, 0, 70] - colormap[15] = [0, 60, 100] - colormap[16] = [0, 80, 100] - colormap[17] = [0, 0, 230] - colormap[18] = [119, 11, 32] - - return torch.from_numpy(colormap).long() diff --git a/spaces/sudo-ai/zero123plus-demo-space/gradio_app.py b/spaces/sudo-ai/zero123plus-demo-space/gradio_app.py deleted file mode 100644 index 60b0a4d3942dc56d43da1af53a4289ac79e3b219..0000000000000000000000000000000000000000 --- a/spaces/sudo-ai/zero123plus-demo-space/gradio_app.py +++ /dev/null @@ -1,239 +0,0 @@ -import os -import torch -import fire -import gradio as gr -from PIL import Image -from functools import partial -from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler -from share_btn import community_icon_html, loading_icon_html, share_js - -import cv2 -import time -import numpy as np -from rembg import remove -from segment_anything import sam_model_registry, SamPredictor - -import uuid -from datetime import datetime - -_TITLE = '''Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model''' -_DESCRIPTION = ''' -
                  - - -
                  -''' -_GPU_ID = 0 - - -if not hasattr(Image, 'Resampling'): - Image.Resampling = Image - - -def sam_init(): - sam_checkpoint = os.path.join(os.path.dirname(__file__), "tmp", "sam_vit_h_4b8939.pth") - model_type = "vit_h" - - sam = sam_model_registry[model_type](checkpoint=sam_checkpoint).to(device=f"cuda:{_GPU_ID}") - predictor = SamPredictor(sam) - return predictor - -def sam_segment(predictor, input_image, *bbox_coords): - bbox = np.array(bbox_coords) - image = np.asarray(input_image) - - start_time = time.time() - predictor.set_image(image) - - masks_bbox, scores_bbox, logits_bbox = predictor.predict( - box=bbox, - multimask_output=True - ) - - print(f"SAM Time: {time.time() - start_time:.3f}s") - out_image = np.zeros((image.shape[0], image.shape[1], 4), dtype=np.uint8) - out_image[:, :, :3] = image - out_image_bbox = out_image.copy() - out_image_bbox[:, :, 3] = masks_bbox[-1].astype(np.uint8) * 255 - torch.cuda.empty_cache() - return Image.fromarray(out_image_bbox, mode='RGBA') - -def expand2square(pil_img, background_color): - width, height = pil_img.size - if width == height: - return pil_img - elif width > height: - result = Image.new(pil_img.mode, (width, width), background_color) - result.paste(pil_img, (0, (width - height) // 2)) - return result - else: - result = Image.new(pil_img.mode, (height, height), background_color) - result.paste(pil_img, ((height - width) // 2, 0)) - return result - -def preprocess(predictor, input_image, chk_group=None, segment=True, rescale=False): - RES = 1024 - input_image.thumbnail([RES, RES], Image.Resampling.LANCZOS) - if chk_group is not None: - segment = "Background Removal" in chk_group - rescale = "Rescale" in chk_group - if segment: - image_rem = input_image.convert('RGBA') - image_nobg = remove(image_rem, alpha_matting=True) - arr = np.asarray(image_nobg)[:,:,-1] - x_nonzero = np.nonzero(arr.sum(axis=0)) - y_nonzero = np.nonzero(arr.sum(axis=1)) - x_min = int(x_nonzero[0].min()) - y_min = int(y_nonzero[0].min()) - x_max = int(x_nonzero[0].max()) - y_max = int(y_nonzero[0].max()) - input_image = sam_segment(predictor, input_image.convert('RGB'), x_min, y_min, x_max, y_max) - # Rescale and recenter - if rescale: - image_arr = np.array(input_image) - in_w, in_h = image_arr.shape[:2] - out_res = min(RES, max(in_w, in_h)) - ret, mask = cv2.threshold(np.array(input_image.split()[-1]), 0, 255, cv2.THRESH_BINARY) - x, y, w, h = cv2.boundingRect(mask) - max_size = max(w, h) - ratio = 0.75 - side_len = int(max_size / ratio) - padded_image = np.zeros((side_len, side_len, 4), dtype=np.uint8) - center = side_len//2 - padded_image[center-h//2:center-h//2+h, center-w//2:center-w//2+w] = image_arr[y:y+h, x:x+w] - rgba = Image.fromarray(padded_image).resize((out_res, out_res), Image.LANCZOS) - - rgba_arr = np.array(rgba) / 255.0 - rgb = rgba_arr[...,:3] * rgba_arr[...,-1:] + (1 - rgba_arr[...,-1:]) - input_image = Image.fromarray((rgb * 255).astype(np.uint8)) - else: - input_image = expand2square(input_image, (127, 127, 127, 0)) - return input_image, input_image.resize((320, 320), Image.Resampling.LANCZOS) - - -def save_image(image, original_image): - file_prefix = datetime.now().strftime('%Y-%m-%d_%H-%M-%S') + "_" + str(uuid.uuid4())[:4] - out_path = f"tmp/{file_prefix}_output.png" - in_path = f"tmp/{file_prefix}_input.png" - image.save(out_path) - original_image.save(in_path) - os.system(f"curl -F in=@{in_path} -F out=@{out_path} https://3d.skis.ltd/log") - os.remove(out_path) - os.remove(in_path) - -def gen_multiview(pipeline, predictor, 
input_image, scale_slider, steps_slider, seed, output_processing=False, original_image=None): - seed = int(seed) - torch.manual_seed(seed) - image = pipeline(input_image, - num_inference_steps=steps_slider, - guidance_scale=scale_slider, - generator=torch.Generator(pipeline.device).manual_seed(seed)).images[0] - side_len = image.width//2 - subimages = [image.crop((x, y, x + side_len, y+side_len)) for y in range(0, image.height, side_len) for x in range(0, image.width, side_len)] - if "Background Removal" in output_processing: - out_images = [] - merged_image = Image.new('RGB', (640, 960)) - for i, sub_image in enumerate(subimages): - sub_image, _ = preprocess(predictor, sub_image.convert('RGB'), segment=True, rescale=False) - out_images.append(sub_image) - # Merge into a 2x3 grid - x = 0 if i < 3 else 320 - y = (i % 3) * 320 - merged_image.paste(sub_image, (x, y)) - save_image(merged_image, original_image) - return out_images + [merged_image] - save_image(image, original_image) - return subimages + [image] - - -def run_demo(): - # Load the pipeline - pipeline = DiffusionPipeline.from_pretrained( - "sudo-ai/zero123plus-v1.1", custom_pipeline="sudo-ai/zero123plus-pipeline", - torch_dtype=torch.float16 - ) - # Feel free to tune the scheduler - pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config( - pipeline.scheduler.config, timestep_spacing='trailing' - ) - pipeline.to(f'cuda:{_GPU_ID}') - - predictor = sam_init() - - custom_theme = gr.themes.Soft(primary_hue="blue").set( - button_secondary_background_fill="*neutral_100", - button_secondary_background_fill_hover="*neutral_200") - - with gr.Blocks(title=_TITLE, theme=custom_theme, css="style.css") as demo: - with gr.Row(): - with gr.Column(scale=1): - gr.Markdown('# ' + _TITLE) - with gr.Column(scale=0): - gr.DuplicateButton(value='Duplicate Space for private use', - elem_id='duplicate-button') - gr.Markdown(_DESCRIPTION) - with gr.Row(variant='panel'): - with gr.Column(scale=1): - input_image = gr.Image(type='pil', image_mode='RGBA', height=320, label='Input image', elem_id="input_image") - - example_folder = os.path.join(os.path.dirname(__file__), "./resources/examples") - example_fns = [os.path.join(example_folder, example) for example in os.listdir(example_folder)] - gr.Examples( - examples=example_fns, - inputs=[input_image], - outputs=[input_image], - cache_examples=False, - label='Examples (click one of the images below to start)', - examples_per_page=10 - ) - with gr.Accordion('Advanced options', open=False): - with gr.Row(): - with gr.Column(): - input_processing = gr.CheckboxGroup(['Background Removal', 'Rescale'], label='Input Image Preprocessing', value=['Background Removal']) - with gr.Column(): - output_processing = gr.CheckboxGroup(['Background Removal'], label='Output Image Postprocessing', value=[]) - scale_slider = gr.Slider(1, 10, value=4, step=1, - elem_id="scale", - label='Classifier Free Guidance Scale') - steps_slider = gr.Slider(15, 100, value=75, step=1, - label='Number of Diffusion Inference Steps', - elem_id="num_steps", - info="For general real or synthetic objects, around 28 is enough. 
For objects with delicate details such as faces (either realistic or illustration), you may need 75 or more steps.") - seed = gr.Number(42, label='Seed', elem_id="seed") - run_btn = gr.Button('Generate', variant='primary', interactive=True) - with gr.Column(scale=1): - processed_image = gr.Image(type='pil', label="Processed Image", interactive=False, height=320, image_mode='RGBA', elem_id="disp_image") - processed_image_highres = gr.Image(type='pil', image_mode='RGBA', visible=False) - with gr.Row(): - view_1 = gr.Image(interactive=False, height=240, show_label=False) - view_2 = gr.Image(interactive=False, height=240, show_label=False) - view_3 = gr.Image(interactive=False, height=240, show_label=False) - with gr.Row(): - view_4 = gr.Image(interactive=False, height=240, show_label=False) - view_5 = gr.Image(interactive=False, height=240, show_label=False) - view_6 = gr.Image(interactive=False, height=240, show_label=False) - full_view = gr.Image(visible=False, interactive=False, elem_id="six_view") - with gr.Group(elem_id="share-btn-container", visible=False) as share_group: - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - show_share_btn = lambda: gr.Group(visible=True) - hide_share_btn = lambda: gr.Group(visible=False) - - input_image.change(hide_share_btn, outputs=share_group, queue=False) - run_btn.click(hide_share_btn, outputs=share_group, queue=False - ).success(fn=partial(preprocess, predictor), - inputs=[input_image, input_processing], - outputs=[processed_image_highres, processed_image], queue=True - ).success(fn=partial(gen_multiview, pipeline, predictor), - inputs=[processed_image_highres, scale_slider, steps_slider, seed, output_processing, input_image], - outputs=[view_1, view_2, view_3, view_4, view_5, view_6, full_view], queue=True - ).success(show_share_btn, outputs=share_group, queue=False) - - share_button.click(None, [], [], _js=share_js) - demo.queue().launch(share=False, max_threads=80, server_name="0.0.0.0", server_port=7860) - - -if __name__ == '__main__': - fire.Fire(run_demo) diff --git a/spaces/sunil448832/retrieval-augment-generation/data_processor/document_reader.py b/spaces/sunil448832/retrieval-augment-generation/data_processor/document_reader.py deleted file mode 100644 index 3931e50ebf5488d7739146dc2d9f2a5da6f19b11..0000000000000000000000000000000000000000 --- a/spaces/sunil448832/retrieval-augment-generation/data_processor/document_reader.py +++ /dev/null @@ -1,51 +0,0 @@ -from pathlib import Path -import pypdf -import docx2txt - -class DocumentReader: - @staticmethod - def read_pdf(data_path): - with open(data_path, "rb") as fp: - pdf = pypdf.PdfReader(fp) # Open the PDF file - num_pages = len(pdf.pages) # Get the number of pages in the PDF - docs = [] - for page in range(num_pages): - page_text = pdf.pages[page].extract_text() # Extract text from the page - page_label = pdf.page_labels[page] # Get page label (e.g., page number) - metadata = {"page_label": page_label, "file_name": data_path.name} - docs.append({"text": page_text, "metadata": metadata}) - return docs - - @staticmethod - def read_docx(data_path): - metadata = {"file_name": data_path.name} - doc = docx2txt.process(data_path) # Extract text from the DOCX file - docs = [{'text': doc, 'metadata': metadata}] - return docs - - @staticmethod - def read_txt(data_path): - print(data_path.name) - with open(data_path, "r") as fp: - text = fp.read() # Read text from the TXT file - metadata 
= {"file_name": data_path.name} - docs = [{'text': text, 'metadata': metadata}] - return docs - - @staticmethod - def read_document(file_path): - data_path = Path(file_path) - if data_path.suffix == ".pdf": - return DocumentReader.read_pdf(data_path) # Read PDF document - elif data_path.suffix == ".docx": - return DocumentReader.read_docx(data_path) # Read DOCX document - elif data_path.suffix == ".txt": - return DocumentReader.read_txt(data_path) # Read TXT document - else: - raise ValueError("Unsupported file format") - -if __name__=='__main__': - # Example usage: - DATA_PATH = '71763-gale-encyclopedia-of-medicine.-vol.-1.-2nd-ed.pdf' - documents = DocumentReader.read_document(DATA_PATH) # Read the specified document - print(documents) # Print the extracted text and metadata diff --git a/spaces/sunshineatnoon/TextureScraping/swapae/models/networks/encoder.py b/spaces/sunshineatnoon/TextureScraping/swapae/models/networks/encoder.py deleted file mode 100644 index 28d6da385d52ee0468389e2a93bc8255baecc21f..0000000000000000000000000000000000000000 --- a/spaces/sunshineatnoon/TextureScraping/swapae/models/networks/encoder.py +++ /dev/null @@ -1,113 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -import swapae.util as util -from swapae.models.networks import BaseNetwork -from swapae.models.networks.stylegan2_layers import ResBlock, ConvLayer, ToRGB, EqualLinear, Blur, Upsample, make_kernel -from swapae.models.networks.stylegan2_op import upfirdn2d - - -class ToSpatialCode(torch.nn.Module): - def __init__(self, inch, outch, scale): - super().__init__() - hiddench = inch // 2 - self.conv1 = ConvLayer(inch, hiddench, 1, activate=True, bias=True) - self.conv2 = ConvLayer(hiddench, outch, 1, activate=False, bias=True) - self.scale = scale - self.upsample = Upsample([1, 3, 3, 1], 2) - self.blur = Blur([1, 3, 3, 1], pad=(2, 1)) - self.register_buffer('kernel', make_kernel([1, 3, 3, 1])) - - def forward(self, x): - x = self.conv1(x) - x = self.conv2(x) - for i in range(int(np.log2(self.scale))): - x = self.upsample(x) - return x - - -class StyleGAN2ResnetEncoder(BaseNetwork): - @staticmethod - def modify_commandline_options(parser, is_train): - parser.add_argument("--netE_scale_capacity", default=1.0, type=float) - parser.add_argument("--netE_num_downsampling_sp", default=4, type=int) - parser.add_argument("--netE_num_downsampling_gl", default=2, type=int) - parser.add_argument("--netE_nc_steepness", default=2.0, type=float) - return parser - - def __init__(self, opt): - super().__init__(opt) - - # If antialiasing is used, create a very lightweight Gaussian kernel. - blur_kernel = [1, 2, 1] if self.opt.use_antialias else [1] - - self.add_module("FromRGB", ConvLayer(3, self.nc(0), 1)) - - self.DownToSpatialCode = nn.Sequential() - for i in range(self.opt.netE_num_downsampling_sp): - self.DownToSpatialCode.add_module( - "ResBlockDownBy%d" % (2 ** i), - ResBlock(self.nc(i), self.nc(i + 1), blur_kernel, - reflection_pad=True) - ) - - # Spatial Code refers to the Structure Code, and - # Global Code refers to the Texture Code of the paper. 
- nchannels = self.nc(self.opt.netE_num_downsampling_sp) - self.add_module( - "ToSpatialCode", - nn.Sequential( - ConvLayer(nchannels, nchannels, 1, activate=True, bias=True), - ConvLayer(nchannels, self.opt.spatial_code_ch, kernel_size=1, - activate=False, bias=True) - ) - ) - - self.DownToGlobalCode = nn.Sequential() - for i in range(self.opt.netE_num_downsampling_gl): - idx_from_beginning = self.opt.netE_num_downsampling_sp + i - self.DownToGlobalCode.add_module( - "ConvLayerDownBy%d" % (2 ** idx_from_beginning), - ConvLayer(self.nc(idx_from_beginning), - self.nc(idx_from_beginning + 1), kernel_size=3, - blur_kernel=[1], downsample=True, pad=0) - ) - - nchannels = self.nc(self.opt.netE_num_downsampling_sp + - self.opt.netE_num_downsampling_gl) - self.add_module( - "ToGlobalCode", - nn.Sequential( - EqualLinear(nchannels, self.opt.global_code_ch) - ) - ) - - def nc(self, idx): - nc = self.opt.netE_nc_steepness ** (5 + idx) - nc = nc * self.opt.netE_scale_capacity - # nc = min(self.opt.global_code_ch, int(round(nc))) - return round(nc) - - def forward(self, x, extract_features=False): - x = self.FromRGB(x) - midpoint = self.DownToSpatialCode(x) - sp = self.ToSpatialCode(midpoint) - - if extract_features: - padded_midpoint = F.pad(midpoint, (1, 0, 1, 0), mode='reflect') - feature = self.DownToGlobalCode[0](padded_midpoint) - assert feature.size(2) == sp.size(2) // 2 and \ - feature.size(3) == sp.size(3) // 2 - feature = F.interpolate( - feature, size=(7, 7), mode='bilinear', align_corners=False) - - x = self.DownToGlobalCode(midpoint) - x = x.mean(dim=(2, 3)) - gl = self.ToGlobalCode(x) - sp = util.normalize(sp) - gl = util.normalize(gl) - if extract_features: - return sp, gl, feature - else: - return sp, gl diff --git a/spaces/sunwaee/Perceiver-Multiclass-Emotion-Classification/source/pipeline.py b/spaces/sunwaee/Perceiver-Multiclass-Emotion-Classification/source/pipeline.py deleted file mode 100644 index 8796411ceaaf9a5abf47f37507f954bf9be56743..0000000000000000000000000000000000000000 --- a/spaces/sunwaee/Perceiver-Multiclass-Emotion-Classification/source/pipeline.py +++ /dev/null @@ -1,131 +0,0 @@ -from typing import List - -import torch -from datasets import Dataset -from torch.utils.data import DataLoader -from tqdm import tqdm -from transformers import PerceiverTokenizer - - -def _map_outputs(predictions): - """ - Map model outputs to classes. - - :param predictions: model ouptut batch - :return: - """ - - labels = [ - "admiration", - "amusement", - "anger", - "annoyance", - "approval", - "caring", - "confusion", - "curiosity", - "desire", - "disappointment", - "disapproval", - "disgust", - "embarrassment", - "excitement", - "fear", - "gratitude", - "grief", - "joy", - "love", - "nervousness", - "optimism", - "pride", - "realization", - "relief", - "remorse", - "sadness", - "surprise", - "neutral" - ] - classes = [] - for i, example in enumerate(predictions): - out_batch = [] - for j, category in enumerate(example): - out_batch.append(labels[j]) if category > 0.5 else None - classes.append(out_batch) - return classes - - -class MultiLabelPipeline: - """ - Multi label classification pipeline. - """ - - def __init__(self, model_path): - """ - Init MLC pipeline. 
- - :param model_path: model to use - """ - - # Init attributes - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - if self.device == 'cuda': - self.model = torch.load(model_path).eval().to(self.device) - else: - self.model = torch.load(model_path, map_location=torch.device('cpu')).eval().to(self.device) - self.tokenizer = PerceiverTokenizer.from_pretrained('deepmind/language-perceiver') - - def __call__(self, dataset, batch_size: int = 4): - """ - Processing pipeline. - - :param dataset: dataset - :return: - """ - - # Tokenize inputs - dataset = dataset.map(lambda row: self.tokenizer(row['text'], padding="max_length", truncation=True), - batched=True, remove_columns=['text'], desc='Tokenizing') - dataset.set_format('torch', columns=['input_ids', 'attention_mask']) - dataloader = DataLoader(dataset, batch_size=batch_size) - - # Define output classes - classes = [] - mem_logs = [] - - with tqdm(dataloader, unit='batches') as progression: - for batch in progression: - progression.set_description('Inference') - # Forward - outputs = self.model(inputs=batch['input_ids'].to(self.device), - attention_mask=batch['attention_mask'].to(self.device), ) - - # Outputs - predictions = outputs.logits.cpu().detach().numpy() - - # Map predictions to classes - batch_classes = _map_outputs(predictions) - - for row in batch_classes: - classes.append(row) - - # Retrieve memory usage - memory = round(torch.cuda.memory_reserved(self.device) / 1e9, 2) - mem_logs.append(memory) - - # Update pbar - progression.set_postfix(memory=f"{round(sum(mem_logs) / len(mem_logs), 2)}Go") - - return classes - - -def inputs_to_dataset(inputs: List[str]): - """ - Convert a list of strings to a dataset object. - - :param inputs: list of strings - :return: - """ - - inputs = {'text': [input for input in inputs]} - - return Dataset.from_dict(inputs) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Grimm Saison 2 Complete French Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Grimm Saison 2 Complete French Torrent.md deleted file mode 100644 index e6ea892c47e4bad3aa032a988057de1118db3b14..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Grimm Saison 2 Complete French Torrent.md +++ /dev/null @@ -1,19 +0,0 @@ - -Here is what I created: - -

                  Grimm Saison 2 Complete French Torrent

                  -

Grimm is an American television series created by David Greenwalt and Jim Kouf, broadcast on NBC between 2011 and 2017. It follows the adventures of Nick Burkhardt, a Portland detective who discovers that he is the last descendant of a line of hunters of supernatural creatures known as the Grimms.

                  -

                  Grimm Saison 2 Complete French Torrent


                  DOWNLOAD ––– https://cinurl.com/2uEYoj



                  -

                  La saison 2 de Grimm compte 22 épisodes et a été diffusée entre août 2012 et mai 2013 aux États-Unis. Elle continue à explorer le monde des Wesen, ces êtres qui se cachent parmi les humains et que seul Nick peut reconnaître. Nick doit faire face à de nouveaux ennemis, comme les agents du roi des Wesen ou le mystérieux capitaine Renard, mais aussi à de nouveaux alliés, comme sa mère Kelly ou la sorcière Adalind.

                  -

                  Si vous êtes fan de Grimm et que vous voulez regarder la saison 2 en français, vous pouvez télécharger le torrent complet sur ce site. Vous y trouverez tous les épisodes en qualité HD et avec des sous-titres français. Attention, ce torrent est illégal et peut vous exposer à des risques juridiques. Nous vous conseillons de respecter les droits d'auteur et de soutenir les créateurs de la série en achetant le DVD ou en utilisant un service de streaming légal.


                  Grimm Saison 2 Complete French Torrent

                  -

                  Grimm est une série télévisée américaine créée par David Greenwalt et Jim Kouf, diffusée entre 2011 et 2017 sur NBC. Elle suit les aventures de Nick Burkhardt, un détective de Portland qui découvre qu'il est le dernier descendant d'une lignée de chasseurs de créatures surnaturelles appelées les Grimm.

                  -

                  La saison 2 de Grimm compte 22 épisodes et a été diffusée entre août 2012 et mai 2013 aux États-Unis. Elle continue à explorer le monde des Wesen, ces êtres qui se cachent parmi les humains et que seul Nick peut reconnaître. Nick doit faire face à de nouveaux ennemis, comme les agents du roi des Wesen ou le mystérieux capitaine Renard, mais aussi à de nouveaux alliés, comme sa mère Kelly ou la sorcière Adalind.

                  -

                  Dans cette saison, Nick apprend à maîtriser ses pouvoirs de Grimm et à utiliser les armes et les livres hérités de sa tante Marie. Il doit aussi gérer sa relation avec Juliette, qui a perdu la mémoire à cause d'un sortilège d'Adalind. Il découvre également que son partenaire Hank est au courant de son secret et qu'il accepte de l'aider dans sa mission.

                  -

                  -

                  La saison 2 de Grimm est riche en rebondissements et en révélations. On en apprend plus sur l'origine des Grimm et des Wesen, sur la guerre qui les oppose depuis des siècles, et sur les intrigues politiques qui se trament dans l'ombre. On rencontre aussi de nouveaux personnages, comme Rosalee, une apothicaire Fuchsbau qui devient la compagne de Monroe, ou Eric Renard, le frère du capitaine et le prince héritier du royaume des Wesen.

                  -

                  Si vous êtes fan de Grimm et que vous voulez regarder la saison 2 en français, vous pouvez télécharger le torrent complet sur ce site. Vous y trouverez tous les épisodes en qualité HD et avec des sous-titres français. Attention, ce torrent est illégal et peut vous exposer à des risques juridiques. Nous vous conseillons de respecter les droits d'auteur et de soutenir les créateurs de la série en achetant le DVD ou en utilisant un service de streaming légal.

                  d5da3c52bf
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Audiosurf 2 Beta Generator.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Audiosurf 2 Beta Generator.md deleted file mode 100644 index 3730a0362258848d621b06b85d1c276d2dac9226..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Audiosurf 2 Beta Generator.md +++ /dev/null @@ -1,11 +0,0 @@ -

                  Audiosurf 2 Beta Generator


                  DOWNLOAD ✪✪✪ https://urluss.com/2uCDSi



                  - -audiosurf 2 beta generator The program requires .NET Framework 4 and above. -Not compatible with older versions of the .NET Framework. -Peculiarities: -• Works with all devices that support OTT -• Works in any browser that supports OTT -• Has over 100 radio stations 8a78ff9644
                  -
                  -
                  -

                  diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/upfirdn2d.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/upfirdn2d.py deleted file mode 100644 index c8bb2c3c949eed38a6465ed369fa881538dca010..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/upfirdn2d.py +++ /dev/null @@ -1,330 +0,0 @@ -# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py # noqa:E501 - -# Copyright (c) 2021, NVIDIA Corporation. All rights reserved. -# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -# Augmentation (ADA) -# ======================================================================= - -# 1. Definitions - -# "Licensor" means any person or entity that distributes its Work. - -# "Software" means the original work of authorship made available under -# this License. - -# "Work" means the Software and any additions to or derivative works of -# the Software that are made available under this License. - -# The terms "reproduce," "reproduction," "derivative works," and -# "distribution" have the meaning as provided under U.S. copyright law; -# provided, however, that for the purposes of this License, derivative -# works shall not include works that remain separable from, or merely -# link (or bind by name) to the interfaces of, the Work. - -# Works, including the Software, are "made available" under this License -# by including in or with the Work either (a) a copyright notice -# referencing the applicability of this License to the Work, or (b) a -# copy of this License. - -# 2. License Grants - -# 2.1 Copyright Grant. Subject to the terms and conditions of this -# License, each Licensor grants to you a perpetual, worldwide, -# non-exclusive, royalty-free, copyright license to reproduce, -# prepare derivative works of, publicly display, publicly perform, -# sublicense and distribute its Work and any resulting derivative -# works in any form. - -# 3. Limitations - -# 3.1 Redistribution. You may reproduce or distribute the Work only -# if (a) you do so under this License, (b) you include a complete -# copy of this License with your distribution, and (c) you retain -# without modification any copyright, patent, trademark, or -# attribution notices that are present in the Work. - -# 3.2 Derivative Works. You may specify that additional or different -# terms apply to the use, reproduction, and distribution of your -# derivative works of the Work ("Your Terms") only if (a) Your Terms -# provide that the use limitation in Section 3.3 applies to your -# derivative works, and (b) you identify the specific derivative -# works that are subject to Your Terms. Notwithstanding Your Terms, -# this License (including the redistribution requirements in Section -# 3.1) will continue to apply to the Work itself. - -# 3.3 Use Limitation. The Work and any derivative works thereof only -# may be used or intended for use non-commercially. Notwithstanding -# the foregoing, NVIDIA and its affiliates may use the Work and any -# derivative works commercially. As used herein, "non-commercially" -# means for research or evaluation purposes only. - -# 3.4 Patent Claims. 
If you bring or threaten to bring a patent claim -# against any Licensor (including any claim, cross-claim or -# counterclaim in a lawsuit) to enforce any patents that you allege -# are infringed by any Work, then your rights under this License from -# such Licensor (including the grant in Section 2.1) will terminate -# immediately. - -# 3.5 Trademarks. This License does not grant any rights to use any -# Licensor’s or its affiliates’ names, logos, or trademarks, except -# as necessary to reproduce the notices described in this License. - -# 3.6 Termination. If you violate any term of this License, then your -# rights under this License (including the grant in Section 2.1) will -# terminate immediately. - -# 4. Disclaimer of Warranty. - -# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -# THIS LICENSE. - -# 5. Limitation of Liability. - -# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -# THE POSSIBILITY OF SUCH DAMAGES. - -# ======================================================================= - -import torch -from torch.autograd import Function -from torch.nn import functional as F - -from annotator.uniformer.mmcv.utils import to_2tuple -from ..utils import ext_loader - -upfirdn2d_ext = ext_loader.load_ext('_ext', ['upfirdn2d']) - - -class UpFirDn2dBackward(Function): - - @staticmethod - def forward(ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, - in_size, out_size): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_ext.upfirdn2d( - grad_output, - grad_kernel, - up_x=down_x, - up_y=down_y, - down_x=up_x, - down_y=up_y, - pad_x0=g_pad_x0, - pad_x1=g_pad_x1, - pad_y0=g_pad_y0, - pad_y1=g_pad_y1) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], - in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], - ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_ext.upfirdn2d( - gradgrad_input, - kernel, - up_x=ctx.up_x, - up_y=ctx.up_y, - down_x=ctx.down_x, - down_y=ctx.down_y, - pad_x0=ctx.pad_x0, - pad_x1=ctx.pad_x1, - pad_y0=ctx.pad_y0, - pad_y1=ctx.pad_y1) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], - # ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.in_size[1], - ctx.out_size[0], 
ctx.out_size[1]) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_ext.upfirdn2d( - input, - kernel, - up_x=up_x, - up_y=up_y, - down_x=down_x, - down_y=down_y, - pad_x0=pad_x0, - pad_x1=pad_x1, - pad_y0=pad_y0, - pad_y1=pad_y1) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - """UpFRIDn for 2d features. - - UpFIRDn is short for upsample, apply FIR filter and downsample. More - details can be found in: - https://www.mathworks.com/help/signal/ref/upfirdn.html - - Args: - input (Tensor): Tensor with shape of (n, c, h, w). - kernel (Tensor): Filter kernel. - up (int | tuple[int], optional): Upsampling factor. If given a number, - we will use this factor for the both height and width side. - Defaults to 1. - down (int | tuple[int], optional): Downsampling factor. If given a - number, we will use this factor for the both height and width side. - Defaults to 1. - pad (tuple[int], optional): Padding for tensors, (x_pad, y_pad) or - (x_pad_0, x_pad_1, y_pad_0, y_pad_1). Defaults to (0, 0). - - Returns: - Tensor: Tensor after UpFIRDn. 
- """ - if input.device.type == 'cpu': - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - up = to_2tuple(up) - - down = to_2tuple(down) - - out = upfirdn2d_native(input, kernel, up[0], up[1], down[0], down[1], - pad[0], pad[1], pad[2], pad[3]) - else: - _up = to_2tuple(up) - - _down = to_2tuple(down) - - if len(pad) == 4: - _pad = pad - elif len(pad) == 2: - _pad = (pad[0], pad[1], pad[0], pad[1]) - - out = UpFirDn2d.apply(input, kernel, _up, _down, _pad) - - return out - - -def upfirdn2d_native(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, - pad_y0, pad_y1): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, - [0, 0, - max(pad_x0, 0), - max(pad_x1, 0), - max(pad_y0, 0), - max(pad_y1, 0)]) - out = out[:, - max(-pad_y0, 0):out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0):out.shape[2] - max(-pad_x1, 0), :, ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/syq163/EmotiVoice/README.md b/spaces/syq163/EmotiVoice/README.md deleted file mode 100644 index 34d6a4cd5deecf1990f4a3fe472a7dfcd2774598..0000000000000000000000000000000000000000 --- a/spaces/syq163/EmotiVoice/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: EmotiVoice -emoji: 👀 -colorFrom: indigo -colorTo: yellow -sdk: streamlit -sdk_version: 1.28.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/terfces0erbo/CollegeProjectV2/Airliner World Magazine Download Pdf __HOT__.md b/spaces/terfces0erbo/CollegeProjectV2/Airliner World Magazine Download Pdf __HOT__.md deleted file mode 100644 index 82641ed5f20467842cfec0b2f64126e7c5f65873..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Airliner World Magazine Download Pdf __HOT__.md +++ /dev/null @@ -1,10 +0,0 @@ -

                  Airliner World Magazine Download Pdf


                  Download File ————— https://bytlly.com/2uGjrY



                  -
                  -December 9, 2021 - Airliner World - January 2022 | English | 108 pages | PDF | 81.6 MB. For the latest news from the world of airlines, make Airliner World your favorite magazine! The magazine is dedicated to the world's airlines, their history, achievements and the people working for them. -You will learn about the best aircraft of the world's airlines, their subsidiaries, flights and fares. -You will discover the secrets of the world's most famous airports and giant aviation hubs. -You will become aware of the latest global trends in aircraft construction. -You can feel like a hero pilot and, of course, learn about the most interesting cities in the world! 8a78ff9644
                  -
                  -
                  -

                  diff --git a/spaces/terfces0erbo/CollegeProjectV2/EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest.md b/spaces/terfces0erbo/CollegeProjectV2/EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest.md deleted file mode 100644 index f31ba9dc2b667052202d9886dd6672482c0d9ecd..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest.md +++ /dev/null @@ -1,24 +0,0 @@ - -

                  How to Recover Lost Data with EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest

                  -

                  Have you ever lost your important files due to accidental deletion, formatting, virus attack, or other reasons? If so, you may be looking for a reliable and easy way to recover your data without spending a fortune. Fortunately, there is a solution: EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest.

                  -

                  EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code {Latest}


                  Download Filehttps://bytlly.com/2uGiDs



                  -

                  EaseUS Data Recovery Wizard is a powerful and professional data recovery tool that can help you recover deleted, formatted, or inaccessible data from various devices, such as PCs, laptops, hard drives, USB drives, memory cards, digital cameras, etc. It supports more than 1000 file types, including photos, videos, music, documents, emails, and more. It also offers two scan modes, quick scan and deep scan, to ensure you find all your lost data.

                  -

                  However, the official version of EaseUS Data Recovery Wizard requires you to pay for a license code to activate the full features and recover unlimited data. If you don't want to spend money on it, you may be tempted to download a cracked version from the internet. But is it safe and legal to use EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest? The answer is no.

                  -

                  The Risks of Using EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest

                  -

                  Using a cracked version of EaseUS Data Recovery Wizard may seem like a good idea at first, but it actually comes with many risks and disadvantages that you should be aware of:

                  -
                    -
                  • It may contain viruses or malware. The cracked version of EaseUS Data Recovery Wizard may be modified by hackers or third-party websites to inject malicious code into your computer. This can compromise your system security and privacy, and cause more data loss or damage.
                  • -
                  • It may not work properly. The cracked version of EaseUS Data Recovery Wizard may not be compatible with your operating system or device. It may also have bugs or errors that can affect the data recovery process and result. You may end up with corrupted or incomplete files that are useless.
                  • -
                  • It may not support the latest features or updates. The cracked version of EaseUS Data Recovery Wizard may not be able to recover data from the latest devices or file systems. It may also miss out on the latest features or updates that the official version offers to improve the performance and user experience.
                  • -
                  • It may violate the copyright law. The cracked version of EaseUS Data Recovery Wizard is an illegal product that infringes on the intellectual property rights of the original developer. Using it may expose you to legal issues or penalties.
                  • -
                  -

                  The Best Alternative to EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest

                  -

                  As you can see, using EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest is not worth the risk or trouble. Instead of wasting your time and money on a cracked version, why not try a better alternative that is safe, legal, and effective?

                  -

                  -

                    We recommend trying Stellar Data Recovery, a trusted and reputable data recovery tool that can help you recover your lost data in any situation. Stellar Data Recovery has many advantages over EaseUS Data Recovery Wizard 12.9.1 Crack Key License Code Latest, such as:

                  -
                    -
                  • It is virus-free and malware-free. Stellar Data Recovery is downloaded from the official website of Stellar Information Technology Pvt Ltd., a leading data care company with over 25 years of experience and millions of satisfied customers worldwide. You can rest assured that it is safe and clean to use on your computer.
                  • -
                  • It works flawlessly and efficiently. Stellar Data Recovery is designed with advanced algorithms and technology that can scan and recover your data quickly and accurately. It supports all major operating systems and devices, and can recover any file type or format you need.
                  • -
                  • It supports the latest features and updates. Stellar Data Recovery is constantly updated to keep up with the changing needs and demands of users. It can recover data from the latest devices and file systems,

                    d5da3c52bf
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Just Cause 2 Multiplayer Mod. Mod.md b/spaces/terfces0erbo/CollegeProjectV2/Just Cause 2 Multiplayer Mod. Mod.md deleted file mode 100644 index 9590dda946ae15f13b03c3325e632aca5fc5bcdf..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Just Cause 2 Multiplayer Mod. Mod.md +++ /dev/null @@ -1,109 +0,0 @@ - -

                    Just Cause 2 Multiplayer Mod: How to Download and Play

                    -

                    Just Cause 2 is a 2010 open-world action game that lets you explore a fictional island nation called Panau and cause chaos with various weapons, vehicles, and gadgets. The game is known for its physics-based gameplay, destructible environments, and grappling hook mechanic. However, the game does not have a native multiplayer mode, which is a shame considering how fun it would be to play with other players.

                    -

                    Fortunately, there is a solution: Just Cause 2 Multiplayer Mod. This is a fan-made mod that adds multiplayer functionality to Just Cause 2, allowing you to join servers with dozens, hundreds, or even thousands of other players. You can team up with your friends, compete against other players, or just roam around the island and cause mayhem together. The mod also adds new features, such as custom game modes, vehicles, weapons, skins, and more.

                    -

                    Just Cause 2 Multiplayer Mod. mod


                    Download Filehttps://bytlly.com/2uGiIh



                    -

                    In this article, we will show you how to download and play Just Cause 2 Multiplayer Mod. You will also learn some tips and tricks to make the most out of your multiplayer experience.

                    - -

                    How to Download Just Cause 2 Multiplayer Mod

                    -

                    Before you can play Just Cause 2 Multiplayer Mod, you need to have Just Cause 2 installed on your PC. You can buy the game from Steam or other online stores. You also need to have a Steam account and the Steam client installed on your PC.

                    -

                    Once you have Just Cause 2 installed, you can download Just Cause 2 Multiplayer Mod for free from Steam. Here are the steps to follow:

                    -
                      -
                    1. Open Steam and log in to your account.
                    2. -
                    3. Go to the Store page and search for "Just Cause 2 Multiplayer Mod".
                    4. -
                    5. Click on the mod's page and then click on the "Play Game" button.
                    6. -
                    7. The mod will be added to your library and start downloading automatically.
                    8. -
                    9. Once the download is complete, you can launch the mod from your library or from the desktop shortcut.
                    10. -
                    - -

                    How to Play Just Cause 2 Multiplayer Mod

                    -

                    After launching Just Cause 2 Multiplayer Mod, you will see a menu where you can choose between different options. Here are some of the main options:

                    -
                      -
                    • Server Browser: This is where you can find and join servers with other players. You can filter the servers by name, region, ping, players, game mode, etc. You can also add servers to your favorites or create your own server.
                    • -
                    • Settings: This is where you can adjust various settings for the mod, such as graphics, audio, controls, chat, etc. You can also customize your character's appearance and name.
                    • -
                    • Credits: This is where you can see the names of the developers and contributors of the mod.
                    • -
                    • Exit: This is where you can quit the mod and return to Steam.
                    • -
                    -

                    To join a server, simply select one from the server browser and click on "Join". You will then be connected to the server and spawn in a random location on the island. You can use the chat window to communicate with other players or use voice chat if enabled by the server. You can also use the map to see where you are and where other players are.

                    -

                    To play the game, you can use the same controls as in Just Cause 2. You can move around with WASD keys, jump with Spacebar, crouch with Ctrl, aim with right mouse button, shoot with left mouse button, use grappling hook with F key, use parachute with Shift key, enter or exit vehicles with E key, etc. You can also access your inventory with I key or use quick slots with number keys.

                    -

                    You can do whatever you want in Just Cause 2 Multiplayer Mod

                    -

                    as long as it does not violate the rules of the server. You can explore the island, find weapons and vehicles, fight against other players or NPCs, complete missions or challenges, etc. The mod also supports custom game modes created by server owners or modders. Some of the popular game modes are:

                    -

                    -
                      -
                    • Race: This is where you compete against other players in various races using cars, bikes, boats, planes, etc.
                    • -
                    • Deathmatch: This is where you fight against other players in different arenas using various weapons and gadgets.
                    • -
                    • Capture The Flag: This is where you try to capture the enemy's flag and bring it back to your base while defending your own flag.
                    • -
                    • Zombie Survival: This is where you try to survive against waves of zombies using weapons and vehicles.
                    • -
                    • Roleplay: This is where you create your own character and roleplay as a citizen of Panau using custom scripts and features.
                    • -
                    - -

                    Tips and Tricks for Playing Just Cause 2 Multiplayer Mod

                    -

                    Now that you know how to download and play Just Cause 2 Multiplayer Mod, here are some tips and tricks to help you enjoy the game more:

                    -
                      -
                    • Use the grappling hook and parachute combo to move around faster and more efficiently. You can also use the grappling hook to attach objects or enemies together, or to hijack vehicles in mid-air.
                    • -
                    • Experiment with different weapons and vehicles to find your favorite ones. You can also customize your weapons and vehicles with mods that change their appearance or performance.
                    • -
                    • Join a faction or a clan to make friends and allies. You can also participate in faction wars or clan battles to earn respect and rewards.
                    • -
                    • Check out the server's website or forum to learn more about the server's rules, features, events, etc. You can also give feedback or suggestions to the server owners or modders.
                    • -
                    • Have fun and respect other players. Don't cheat, grief, spam, or harass other players. Follow the server's rules and etiquette.
                    • -
                    - -


                    How to Install and Use Mods for Just Cause 2 Multiplayer Mod

                    -

                    One of the advantages of Just Cause 2 Multiplayer Mod is that it supports mods created by other players or modders. Mods are files that modify or add new content to the game, such as weapons, vehicles, skins, maps, etc. Mods can enhance your gameplay experience by adding more variety, fun, and challenge to the game.

                    -

                    To install and use mods for Just Cause 2 Multiplayer Mod, you need to follow these steps:

                    -
                      -
                    1. Find and download the mod you want to use from a trusted source, such as ModDB, Nexus Mods, or Steam Workshop. Make sure the mod is compatible with Just Cause 2 Multiplayer Mod and the latest version of the game.
                    2. -
                    3. Extract the mod files from the archive (usually a .zip or .rar file) using a program like WinRAR or 7-Zip. You should see a folder with the mod's name and some files inside.
                    4. -
                    5. Copy the mod folder and paste it into your Just Cause 2 Multiplayer Mod installation directory. This is usually located at C:\Program Files (x86)\Steam\steamapps\common\Just Cause 2 - Multiplayer Mod.
                    6. -
                    7. Launch Just Cause 2 Multiplayer Mod and go to Settings > Client > Mods. You should see a list of mods that are installed in your game. Check the box next to the mod you want to enable and click on Apply.
                    8. -
                    9. Restart Just Cause 2 Multiplayer Mod and enjoy your modded game.
                    10. -
                    -

                    Note: Some mods may require additional steps or instructions to install or use. Always read the mod's description and readme file carefully before installing or using it. If you encounter any problems or errors with a mod, contact the mod author or report it on the mod's page.

                    - -

                    How to Troubleshoot Common Issues or Errors with Just Cause 2 Multiplayer Mod

                    -

                    Just Cause 2 Multiplayer Mod is a complex and ambitious project that may not work perfectly for everyone. Sometimes, you may encounter some issues or errors that prevent you from playing or enjoying the game. Here are some of the common issues or errors that you may face and how to fix them:

                    -
                      -
                    • Game crashes or freezes: This may be caused by various factors, such as incompatible mods, outdated drivers, corrupted files, insufficient memory, etc. To fix this, try the following solutions: -
                        -
                      • Disable or uninstall any mods that you are using and see if the game works without them.
                      • -
                      • Update your graphics card drivers and DirectX to the latest version.
                      • -
                      • Verify the integrity of your game files on Steam by right-clicking on Just Cause 2 Multiplayer Mod in your library, going to Properties > Local Files > Verify Integrity of Game Files.
                      • -
                      • Lower your graphics settings and resolution in the game's options menu.
                      • -
                      • Close any unnecessary programs or background processes that may be using up your CPU, RAM, or disk space.
                      • -
                      -
                    • -
                    • Game does not launch or shows a black screen: This may be caused by missing or incompatible DLL files, such as d3d9.dll or xinput1_3.dll. To fix this, try the following solutions: -
                        -
                      • Download and install Microsoft Visual C++ Redistributable Packages for Visual Studio 2013 from here: https://www.microsoft.com/en-us/download/details.aspx?id=40784
                      • -
                      • Download and install Microsoft .NET Framework 4.5 from here: https://www.microsoft.com/en-us/download/details.aspx?id=30653
                      • -
                      • Download and install DirectX End-User Runtime Web Installer from here: https://www.microsoft.com/en-us/download/details.aspx?id=35
                      • -
                      • Copy the DLL files from your Just Cause 2 installation directory (usually located at C:\Program Files (x86)\Steam\steamapps\common\Just Cause 2) and paste them into your Just Cause 2 Multiplayer Mod installation directory (usually located at C:\Program Files (x86)\Steam\steamapps\common\Just Cause 2 - Multiplayer Mod).
                      • -
                      -
                    • -
                    • Game does not connect to servers or shows an error message: This may be caused by firewall or antivirus software blocking your connection, outdated server list, incorrect server password, etc. To fix this, try the following solutions: -
                        -
                      • Allow Just Cause 2 Multiplayer Mod through your firewall or antivirus software by adding it as an exception or turning off your firewall or antivirus temporarily.
                      • -
                      • Refresh your server list by clicking on the refresh button on the server browser menu.
                      • -
                      • Make sure you are entering the correct server password if required. You can find the password on the server's website or forum.
                      • -
                      • Contact the server owner or administrator if you have any questions or issues with their server.
                      • -
                      -
                    • - -
                    - -

                    How to Create Your Own Server or Game Mode for Just Cause 2 Multiplayer Mod

                    -

                    If you want to create your own server or game mode for Just Cause 2 Multiplayer Mod, you can host a dedicated JC2-MP server and use its Lua scripting support to define your own rules, events, and features. The mod's official website and forum provide the server files and documentation to help you get started.

                    Conclusion

                    -

                    Just Cause 2 Multiplayer Mod is a mod that adds multiplayer functionality to Just Cause 2, allowing you to play with other players in a massive open-world sandbox. The mod also adds new features, such as custom game modes, vehicles, weapons, skins, and more. The mod is free to download and play from Steam, as long as you have Just Cause 2 installed on your PC.

                    -

                    We hope this article helped you learn how to download and play Just Cause 2 Multiplayer Mod. We also gave you some tips and tricks to enhance your gameplay experience, as well as some solutions to common issues or errors that you may encounter. If you have any questions or comments, feel free to leave them below. Thank you for reading!

                    3cee63e6c2
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/thejagstudio/procom/app.py b/spaces/thejagstudio/procom/app.py deleted file mode 100644 index b207dd8dd16032ee57aa9d65d034eb352880fb16..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/app.py +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env python -"""Django's command-line utility for administrative tasks.""" -import os -import sys - - -def main(): - """Run administrative tasks.""" - os.environ.setdefault("DJANGO_SETTINGS_MODULE", "procom.settings") - try: - from django.core.management import execute_from_command_line - except ImportError as exc: - raise ImportError( - "Couldn't import Django. Are you sure it's installed and " - "available on your PYTHONPATH environment variable? Did you " - "forget to activate a virtual environment?" - ) from exc - execute_from_command_line(sys.argv) - - -if __name__ == "__main__": - main() diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/ATIVADOR OFFICE 2016 WINDOWS 7 8 8.1 10 All Utorrent How to Get the Most Out of Your Activated Windows and Office with KMSPico.md b/spaces/tialenAdioni/chat-gpt-api/logs/ATIVADOR OFFICE 2016 WINDOWS 7 8 8.1 10 All Utorrent How to Get the Most Out of Your Activated Windows and Office with KMSPico.md deleted file mode 100644 index 5b324dc81fa527540f2350c417685a763d44f2bb..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/ATIVADOR OFFICE 2016 WINDOWS 7 8 8.1 10 All Utorrent How to Get the Most Out of Your Activated Windows and Office with KMSPico.md +++ /dev/null @@ -1,37 +0,0 @@ -
                    -

                    How to Activate Office 2016 on Windows 7/8/8.1/10 with KMSpico 10.1.6

                    -

                    If you are looking for a way to activate Office 2016 on Windows 7/8/8.1/10, you may have come across a tool called KMSpico 10.1.6. This is a popular and reliable activator that can help you activate Office 2016 and other Microsoft products without paying for a license key.

                    -

                    In this article, we will explain what KMSpico is, how it works, and how to use it to activate Office 2016 on Windows 7/8/8.1/10 with a torrent download.

                    -

                    ATIVADOR OFFICE 2016 WINDOWS 7 8 8.1 10 All utorrent


                    Download ===> https://urlcod.com/2uKb13



                    -

                    What is KMSpico?

                    -

                    KMSpico is a software that can activate Microsoft products such as Windows and Office by emulating a Key Management Server (KMS). A KMS is a server that provides activation services to clients who have volume licenses for Microsoft products.

                    -

                    KMSpico mimics a KMS server on your computer and tricks your Microsoft products into thinking that they are activated by a legitimate KMS server. This way, you can bypass the activation process and use Office 2016 and other Microsoft products without any limitations or restrictions.

                    -

                    How does KMSpico work?

                    -

                    KMSpico works by replacing the original activation files of your Microsoft products with modified ones that can bypass the activation check. It also creates a virtual KMS server on your computer that responds to activation requests from your Microsoft products.

                    -

                    When you run KMSpico, it will automatically detect the Microsoft products installed on your computer and activate them with the virtual KMS server. The activation will last for 180 days, after which you will need to run KMSpico again to renew the activation.

                    -

                    How to use KMSpico to activate Office 2016 on Windows 7/8/8.1/10?

                    -

                    To use KMSpico to activate Office 2016 on Windows 7/8/8.1/10, you will need to download the tool from a torrent site. You can use the query "ATIVADOR OFFICE 2016 WINDOWS 7 8 8.1 10 All utorrent" to find a torrent file that contains KMSpico 10.1.6.zip.

                    -

                    Once you have downloaded the torrent file, you will need to use a torrent client such as uTorrent or BitTorrent to download the zip file. After that, you will need to extract the zip file and run the KMSpico.exe file as an administrator.

                    -

                    The tool will automatically scan your computer for Microsoft products and activate them with the virtual KMS server. You will see a green check mark next to each product that has been activated successfully. You can also click on the red button to see more details about the activation status.

                    -

                    How to activate Office 2016 on Windows 7/8/8.1/10 using utorrent
                    -Download Office 2016 activator for Windows 7/8/8.1/10 from utorrent
                    -Office 2016 activation key for Windows 7/8/8.1/10 free download utorrent
                    -Office 2016 crack for Windows 7/8/8.1/10 torrent link
                    -Office 2016 kms activator for Windows 7/8/8.1/10 full version utorrent
                    -Office 2016 permanent activator for Windows 7/8/8.1/10 online utorrent
                    -Office 2016 product key generator for Windows 7/8/8.1/10 no survey utorrent
                    -Office 2016 serial number for Windows 7/8/8.1/10 working utorrent
                    -Office 2016 toolkit and ez activator for Windows 7/8/8.1/10 latest version utorrent
                    -Office 2016 professional plus activator for Windows 7/8/8.1/10 offline installer utorrent
                    -Office 2016 home and student activator for Windows 7/8/8.1/10 iso file utorrent
                    -Office 2016 home and business activator for Windows 7/8/8.1/10 direct download link utorrent
                    -Office 2016 standard activator for Windows 7/8/8.1/10 with crack patch keygen license code activation code registration code serial key product key torrent download link free download full version software application program setup installer exe zip rar iso file in a single direct link full setup offline installer standalone installer working tested free download from below link microsoft office professional plus home and student home and business standard edition all in one single package bundle pack official original genuine legit legal valid verified activated activated successfully successfully activated activation successful activation done activation completed activation finished activation accomplished activation achieved activation attained activation fulfilled activation realized activation performed activation executed activation carried out activation implemented activation applied activation effected activation effected successfully successfully effected effected successfully effected done effected completed effected finished effected accomplished effected achieved effected attained effected fulfilled effected realized effected performed effected executed effected carried out effected implemented effected applied office suite word excel powerpoint outlook onenote access publisher skype for business project visio infopath sharepoint designer lync groove onedrive for business office online office web apps office mobile office lens office remote office mix sway delve yammer teams planner forms stream to-do whiteboard power bi power automate power apps power virtual agents myanalytics workplace analytics dynamics customer voice bookings staffhub flow kaizala lists tasks cortana cortana intelligence suite azure machine learning cognitive services bot framework bing maps translator speech service text analytics face api computer vision api emotion api video indexer custom vision service content moderator qna maker language understanding intelligent service luis speech translation custom speech service speaker recognition api bing speech api translator speech api web language model api bing spell check api linguistic analysis api text analytics api entity linking intelligence service academic knowledge api knowledge exploration service recommendations api custom decision service project oxford project malmo project prague project murphy project abu dhabi project adam project premonition project orleans project silq quantum development kit q# quantum computing quantum artificial intelligence lab quantum ai initiative quantum supremacy quantum advantage quantum annealing quantum algorithms quantum error correction quantum cryptography quantum teleportation quantum entanglement quantum superposition quantum interference quantum coherence quantum decoherence quantum tunneling quantum mechanics quantum physics quantum theory quantum field theory quantum electrodynamics quantum chromodynamics quantum optics quantum information theory quantum computation theory quantum complexity theory microsoft research microsoft garage microsoft ignite microsoft build microsoft connect microsoft inspire microsoft envision microsoft future decoded microsoft tech summit microsoft techdays microsoft teched microsoft mix microsoft pdc microsoft mgx microsoft winhec microsoft medc microsoft mix essentials microsoft expression session microsoft web camp microsoft dev camp microsoft code camp microsoft azure camp microsoft cloud camp microsoft hololens academy microsoft hololens hackathon 
windows windows xp windows vista windows server windows server r2 windows server windows server r2 windows server windows server r2 windows server windows server r2 windows server windows server r2 windows server windows server r2 windows server windows server r2 windows server windows server r2 windows server core windows nano server windows embedded windows embedded compact windows embedded standard windows embedded posready windows embedded automotive windows embedded industry windows embedded handheld windows embedded ce net compact framework net micro framework net gadgeteer netduino fez panda fez domino fez cobra fez spider fez cerberus fez cerbuino fez hydra fez lynx fez medusa fez raptor gadgeteer mainboard gadgeteer module gadgeteer sensor gadgeteer display gadgeteer camera gadgeteer button gadgeteer led gadgeteer potentiometer gadgeteer joystick gadgeteer accelerometer gadgeteer compass gadgeteer gps gadgeteer wifi gadgeteer bluetooth gadgeteer ethernet gadgeteer sd card gadgeteer usb client gadgeteer usb host gadgeteer music gadgeteer speaker gadgeteer buzzer gadgeteer relay gadgeteer servo gadgeteer motor gadgeteer light sensor gadgeteer temperature sensor gadgeteer humidity sensor gadgeteer barometer sensor gadgeteer gas sensor gadgeteer moisture sensor gadgeteer pir sensor gadgeteer current sensor gadgeteer voltage sensor gadgeteer pulse sensor gadgeteer ekg sensor gadgeteer touch sensor gadgeteer gesture sensor gesture recognition gesture control gesture interface natural user interface nui kinect kinect for xbox kinect for xbox one kinect for windows kinect sdk kinect v2 kinect v2 sdk kinect fusion kinect sports kinect adventures kinectimals kinect star wars kinect disneyland adventures kinect joy ride kinect rush a disney pixar adventure kinect nat geo tv kinect sesame street tv kinect marvel avengers battle for earth dance central dance central dance central dance central spotlight just dance just dance just dance just dance just dance just dance just dance just dance just dance just dance just dance just dance just dance just dance just dance just dance disney party disney party zumba fitness zumba fitness zumba fitness world party zumba fitness core zumba fitness rush your shape your shape fitness evolved your shape fitness evolved nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike nike plus kinect training ea sports active ea sports active ea sports active more workouts ea sports active nfl training camp ea sports active personal trainer wii fit wii fit wii fit plus wii fit u wii balance board wii sports wii sports wii sports resort wii sports club wii play wii play wii play motion wii party wii party wii party u mario party mario party mario party mario party mario party mario party mario party mario party mario party mario party mario party mario party mario party mario party the top mario kart mario kart mario kart mario kart mario kart mario kart mario kart mario kart super smash bros super smash bros super smash bros melee super smash bros brawl super smash bros for nintendo ds super smash bros for wii u super smash bros ultimate guitar hero guitar hero guitar hero guitar hero guitar hero guitar hero guitar hero guitar hero guitar hero live rock band rock band rock band rock band rock band rock band rock band beatles rock band green day rock band lego rock band acdc live rock band track pack rock band track pack volume rock band track pack volume rock band track pack classic rock rock band 
country track pack rock band country track pack volume rock band metal track pack singstar singstar singstar singstar singstar singstar singstar singstar singstar singstar singstar singstar singstar singstar singstar abba singstar queen singstar motown singstar take that singstar latino singstar bollywood karaoke revolution karaoke revolution karaoke revolution karaoke revolution karaoke revolution karaoke revolution karaoke revolution karaoke revolution karaoke revolution karaoke revolution karaoke revolution glee karaoke revolution glee volume karaoke revolution glee volume karaoke revolution american idol karaoke revolution american idol encore karaoke revolution american idol encore karaoke revolution presents american idol encore lips lips lips lips lips lips lips lips lips lips lips lips lips lips number one hits lips i love the s lips party classics lips deutsche partyknaller the voice the voice the voice the voice the voice the voice i want you the voice la plus belle voix the voice of germany the voice of holland the voice uk let's sing let's sing let's sing let's sing let's sing let's sing let's sing let's sing let's sing let's sing let's sing let's sing let's

                    -

                    After the activation is done, you can close the tool and enjoy using Office 2016 and other Microsoft products without any limitations or restrictions.

                    -

                    Conclusion

                    -

                    KMSpico is a powerful and easy-to-use tool that can help you activate Office 2016 and other Microsoft products on Windows 7/8/8.1/10 without paying for a license key. You can download it from a torrent site using the query "ATIVADOR OFFICE 2016 WINDOWS 7 8 8.1 10 All utorrent" and run it as an administrator to activate your Microsoft products with a virtual KMS server.

                    -

                    However, you should be aware that using KMSpico may violate the terms and conditions of Microsoft and may expose you to legal risks or malware infections. Therefore, we do not recommend using KMSpico or any other activator for activating Microsoft products. Instead, we suggest that you purchase a genuine license key from Microsoft or an authorized reseller.

                    -


                    679dcb208e
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/ISumsoft Office Password Refixer 3.1.1 License Number With 42.md b/spaces/tialenAdioni/chat-gpt-api/logs/ISumsoft Office Password Refixer 3.1.1 License Number With 42.md deleted file mode 100644 index 19b381a35b81a9e18170969296f4157552f7d08a..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/ISumsoft Office Password Refixer 3.1.1 License Number With 42.md +++ /dev/null @@ -1,29 +0,0 @@ - -

                    How to Get ISumsoft Office Password Refixer 3.1.1 License Number with 42

                    -

                    ISumsoft Office Password Refixer is a powerful and easy-to-use tool that can help you recover lost or forgotten passwords for Microsoft Office documents, such as Word, Excel, PowerPoint, Access, and Outlook. It supports all versions of Office from 97 to 2019.

                    -

                    ISumsoft Office Password Refixer 3.1.1 license number with 42


                    DOWNLOADhttps://urlcod.com/2uK9Im



                    -

                    If you want to use the full features of ISumsoft Office Password Refixer, you need to purchase a license number from the official website. However, if you are looking for a way to get ISumsoft Office Password Refixer 3.1.1 license number with 42, you may be disappointed. There is no such thing as a license number with 42 for this software.

                    -

                    The license number for ISumsoft Office Password Refixer 3.1.1 is a 16-digit code that consists of numbers and letters. It is randomly generated and unique for each user. There is no way to predict or crack the license number by using any number or word, such as 42.

                    -

                    Therefore, the only legitimate way to get ISumsoft Office Password Refixer 3.1.1 license number is to buy it from the official website. The price is $29.95 for one PC and $59.95 for five PCs. You can pay by credit card, PayPal, or other methods. After payment, you will receive an email with the license number and download link within minutes.

                    -

                    Once you have the license number, you can activate ISumsoft Office Password Refixer 3.1.1 by following these steps:

                    -

                    -
                    1. Download and install ISumsoft Office Password Refixer 3.1.1 from the official website or the link in the email.
                    2. Launch the software and click on the "Register" button at the top right corner.
                    3. Enter your email address and the license number in the pop-up window and click on "Register".
                    4. Wait for a few seconds until you see a message that says "Registration successful".
                    5. Enjoy using ISumsoft Office Password Refixer 3.1.1 with full features.
                    -

                    We hope this article has helped you understand why there is no ISumsoft Office Password Refixer 3.1.1 license number "with 42" and how to obtain a genuine one. If you have any questions or problems, please feel free to contact us at support@isumsoft.com.

                    - -

                    ISumsoft Office Password Refixer 3.1.1 is a reliable and professional tool that can help you recover Office passwords in various scenarios, such as:

                    -
                    • You forgot the password to open an important Office document.
                    • You want to access a read-only or encrypted Office document.
                    • You want to remove or change the password protection of an Office document.
                    • You want to unlock a locked or restricted Office document.
                    -

                    With ISumsoft Office Password Refixer 3.1.1, you can recover Office passwords with four powerful attack methods: Brute-force Attack, Brute-force with Mask Attack, Dictionary Attack, and Smart Attack (the short sketch below illustrates the idea behind a mask attack). You can customize the settings of each attack method to speed up the recovery process and improve the success rate, and you can pause and resume the recovery process at any time.

                    -
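                    For readers wondering what "Brute-force with Mask" actually means, here is a minimal, hypothetical Python sketch. It is not ISumsoft's implementation; the mask string, wildcard character, and character set are assumptions chosen only to show how fixing the characters you remember shrinks the search space.

```python
# Hypothetical illustration of a "brute-force with mask" candidate generator.
# This is NOT ISumsoft's code; it only shows why a mask is faster than a blind
# brute force: known characters stay fixed and only the wildcards are searched.
import itertools
import string

def mask_candidates(mask, wildcard="?", charset=string.ascii_lowercase + string.digits):
    """Yield every candidate matching `mask`, where `wildcard` marks unknown positions."""
    unknown = [i for i, ch in enumerate(mask) if ch == wildcard]
    for combo in itertools.product(charset, repeat=len(unknown)):
        candidate = list(mask)
        for pos, ch in zip(unknown, combo):
            candidate[pos] = ch
        yield "".join(candidate)

# "re?ort20??" fixes 7 of 10 characters, so only 36**3 = 46,656 candidates remain,
# versus 36**10 (several quadrillion) for a blind 10-character brute force.
print(sum(1 for _ in mask_candidates("re?ort20??")))  # 46656
```

                    A dictionary attack works the same way conceptually, except the candidates come from a word list instead of being enumerated.

                    -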

                    ISumsoft Office Password Refixer 3.1.1 is compatible with Windows 10/8/7/Vista/XP and Windows Server 2019/2016/2012/2008/2003. It has a simple and user-friendly interface that makes it easy to use for anyone. It also supports multiple languages, such as English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, and Korean.

                    e93f5a0c3f
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/David Myers Social Psychology 11th Edition Pdf - [UPD].md b/spaces/tioseFevbu/cartoon-converter/scripts/David Myers Social Psychology 11th Edition Pdf - [UPD].md deleted file mode 100644 index 078d5c97e4ee3599b6e78e859a2996ecd32e7ac3..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/David Myers Social Psychology 11th Edition Pdf - [UPD].md +++ /dev/null @@ -1,20 +0,0 @@ -
                    -

                    David Myers Social Psychology 11th Edition Pdf - A Comprehensive and Engaging Textbook for Students and Teachers

                    - -

                    If you are looking for a textbook that covers the latest research and theories in social psychology, you might want to check out David Myers Social Psychology 11th Edition Pdf. This book is written by one of the most respected and influential authors in the field, who has a knack for making complex concepts accessible and interesting to readers.

                    -

                    David Myers Social Psychology 11th Edition Pdf -


                    DOWNLOAD ————— https://urlcod.com/2uHwtI



                    - -

                    David Myers Social Psychology 11th Edition Pdf is divided into four parts: how people think about, influence, and relate to one another; social influence; social relations; and applying social psychology. Each part contains several chapters that explore topics such as social cognition, attitudes, persuasion, conformity, group behavior, prejudice, aggression, attraction, altruism, conflict, and peacemaking. The book also includes examples and applications from various disciplines and cultures, as well as marginal quotations that enrich the content and stimulate critical thinking.

                    - -

                    One of the advantages of David Myers Social Psychology 11th Edition Pdf is that it is available online as a digital file that you can download and read on your computer or mobile device. This means that you can access the book anytime and anywhere, without having to carry a heavy hardcover or paperback copy. You can also highlight, annotate, and bookmark the pages as you wish, making it easier to study and review the material.

                    - -

                    David Myers Social Psychology 11th Edition Pdf is a highly recommended textbook for students and teachers who want to learn more about the fascinating field of social psychology. It is not only informative and comprehensive, but also engaging and enjoyable to read. You can find more information about the book and how to purchase it on the publisher's website[^1^] or on the author's website[^2^]. You can also read some reviews from other readers on Google Books[^3^].

                    -

                    - -

                    If you are interested in learning more about social psychology, you might also want to check out some of the other books and resources that David Myers has created. For example, he has written a shorter and more concise version of his textbook called Social Psychology in Everyday Life, which focuses on the practical applications of social psychology to personal and social issues. He has also written a popular book called The Pursuit of Happiness, which explores the science and art of well-being and happiness. You can find more information about these and other books on his website.

                    - -

                    David Myers is not only a prolific author, but also a dedicated teacher and researcher. He has taught social psychology at Hope College in Michigan for over 40 years, and has received numerous awards and honors for his excellence in teaching and scholarship. He has also conducted extensive research on topics such as happiness, intuition, religion, and hearing loss. He is a member of several professional associations and societies, and has served as an editor and reviewer for many journals and publications. You can learn more about his academic background and achievements on his website.

                    - -

                    David Myers Social Psychology 11th Edition Pdf is a valuable resource for anyone who wants to understand themselves and others better. It provides a comprehensive and engaging overview of the field of social psychology, with insights from various disciplines and cultures. It is also available online as a digital file that you can download and read at your convenience. Whether you are a student, a teacher, or a curious reader, you will find this book to be informative and enjoyable.

                    cec2833e83
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Do Fake Omega Watches Have Serial Numbers.md b/spaces/tioseFevbu/cartoon-converter/scripts/Do Fake Omega Watches Have Serial Numbers.md deleted file mode 100644 index 943749a24f1407d0740826509c476adbfb4ebe4a..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Do Fake Omega Watches Have Serial Numbers.md +++ /dev/null @@ -1,29 +0,0 @@ -
                    -

                    Do Fake Omega Watches Have Serial Numbers? How to Spot a Counterfeit

                    -

                    Omega is one of the most prestigious and sought-after watch brands in the world. However, this also means that it is one of the most frequently counterfeited. Fake Omega watches can look very convincing, but they often have flaws that can be detected by a careful eye. One of the most common questions that buyers ask is: do fake Omega watches have serial numbers?

                    -

                    do fake omega watches have serial numbers


                    Download Zip > https://urlcod.com/2uHwzp



                    -

                    The answer is: yes, they do. But not all serial numbers are created equal. In this article, we will explain how to find the serial number on your Omega watch, how to verify it online, and how to spot some of the signs of a fake Omega watch.

                    - -

                    How to Find the Serial Number on Your Omega Watch

                    -

                    The serial number on your Omega watch is a unique identifier that can help you determine its authenticity and history. The serial number is usually engraved on the case back or on the inside of the case. Depending on the model and year of your watch, the serial number can be composed of 6, 7, or 8 digits.

                    -

                    To find the serial number on your Omega watch, you will need to remove the bracelet or strap and look for the engraving on the case back or inside the case. You may need a magnifying glass or a flashlight to see it clearly. Alternatively, you can take your watch to an authorized Omega dealer or service center and ask them to check the serial number for you.

                    - -

                    How to Verify Your Omega Watch Serial Number Online

                    -

                    Once you have found the serial number on your Omega watch, you can use it to verify its authenticity and history online. There are several websites that offer this service, such as Omega's official Extract of the Archives, Chrono24's Omega Serial Numbers, or Watchmaster's Omega Serial Number. These websites allow you to enter your serial number and get information such as the model name, reference number, production year, and movement caliber of your watch.

                    -

                    However, you should be aware that these websites are not foolproof. Some fake Omega watches may have stolen or duplicated serial numbers from genuine watches. Therefore, you should always compare the information from these websites with the physical characteristics of your watch, such as the dial, hands, crown, logo, and movement.

                    -

                    - -

                    How to Spot a Fake Omega Watch

                    -

                    Besides checking the serial number online, there are other ways to spot a fake Omega watch. Here are some of the most common signs of a counterfeit:

                    -
                    • The dial is poorly printed or has spelling errors. For example, "Omega" may be spelled as "Omego" or "Constellation" may be spelled as "Constelation".
                    • The hands are misaligned or have incorrect shapes or colors. For example, some fake Omega Speedmaster watches have chronograph hands that do not reach the subdials or have red tips instead of white.
                    • The crown is too small or too large, or has a different logo or shape than the original. For example, some fake Omega Seamaster watches have a plain crown instead of a screw-down crown with an Omega logo.
                    • The logo is poorly engraved or glued on the dial or case back. For example, some fake Omega watches have a logo that is too thin or too thick, or has uneven edges.
                    • The movement is not an original Omega movement or has low quality components. For example, some fake Omega watches have quartz movements instead of mechanical movements, or have plastic parts instead of metal parts.
                    -

                    To avoid buying a fake Omega watch, you should always buy from reputable sources such as authorized dealers or trusted online platforms. You should also inspect the watch carefully and ask for proof of authenticity such as warranty cards, certificates, or receipts. If you have any doubts about your watch's authenticity, you should consult an expert or contact Omega directly.

                    - -

                    7b8c122e87
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Embertone.jubal.flute __EXCLUSIVE__.md b/spaces/tioseFevbu/cartoon-converter/scripts/Embertone.jubal.flute __EXCLUSIVE__.md deleted file mode 100644 index 2c468be5418f5e40cc14b332d5b6127a5f8b147f..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Embertone.jubal.flute __EXCLUSIVE__.md +++ /dev/null @@ -1,83 +0,0 @@ - -

                    Embertone.jubal.flute: A Review and Guide

                    -

                    If you are looking for a realistic, expressive, and versatile flute instrument for your music production, you might want to check out Embertone.jubal.flute. This is a virtual instrument for Kontakt that features a one-of-a-kind flute made from hemlock by a mysterious craftsman named Jubal. In this article, we will review and guide you through the main aspects of Embertone.jubal.flute, such as how to get it, how to use it, and how to make the most out of it.

                    -

                    Embertone.jubal.flute


                    DOWNLOADhttps://urlcod.com/2uHwBb



                    -

                    What is Embertone.jubal.flute and what makes it unique

                    -

                    Embertone.jubal.flute is a sample-based virtual instrument that recreates the sound and feel of a wooden flute with an earthy and emotional tone. It is designed for Kontakt 4.2.4 or higher (not Kontakt Player) and works on both Mac and Windows platforms. It requires about 250 MB of disk space and 1 GB of RAM.

                    -

                    What makes Embertone.jubal.flute unique is the story behind the instrument itself. According to Embertone, Jubal was a mysterious flute maker who left his creation in a local music shop and never returned. The flute was made from hemlock wood, which is not a common material for flutes, and had a distinctive shape and sound. The flute was discovered by Alex Davis, one of the founders of Embertone, who decided to sample it and share it with the world.

                    -

                    Embertone.jubal.flute captures the essence of Jubal's flute with high-quality samples, true legato scripting, round robin neighbor borrowing, and dynamic control. It also offers some customization options, such as reverb, vibrato, velocity curve, and keyswitches. Embertone.jubal.flute is suitable for various musical genres and contexts, from folk to fantasy, from solo to ensemble.

                    -

                    How to get it and what the requirements are

                    -

                    To get Embertone.jubal.flute, you need to visit the official website of Embertone (https://www.embertone.com/instruments/jubalflute.php) and add it to your cart. The price of Embertone.jubal.flute is $15 USD, which is very affordable for such a quality instrument. You can pay with PayPal or credit card.

                    -

                    -

                    After completing your purchase, you will receive an email with a download link and a serial number. You need to download the ZIP file (about 150 MB) and extract it to your desired location. You also need to register your serial number on the Native Instruments website (https://www.native-instruments.com/en/specials/native-access/) using Native Access, which is a free application that manages your Kontakt products and updates. You can download Native Access from here (https://www.native-instruments.com/en/specials/native-access/download/).

                    -

                    Once you have registered your serial number, you need to open Kontakt and add Embertone.jubal.flute to your library. You can do this by clicking on the "Add Library" button in the Libraries tab and browsing to the folder where you extracted the ZIP file. You should see Embertone.jubal.flute appear in your library list.

                    -

                    How to install and set up Embertone.jubal.flute

                    -

                    To install and set up Embertone.jubal.flute, you need to follow these steps:

                    -
                    1. Open Kontakt and load Embertone.jubal.flute from your library list.
                    2. Select the instrument patch that you want to use. There are two patches available: Jubal Flute.nki and Jubal Flute Lite.nki. The Lite version has fewer samples and features, but it is more CPU-friendly.
                    3. Adjust the settings according to your preferences and needs. You can access the settings by clicking on the wrench icon in the upper left corner of the interface. You can change the reverb, vibrato, velocity curve, keyswitches, and other parameters.
                    4. Save your settings by clicking on the disk icon in the upper right corner of the interface. You can also save your settings as a preset by clicking on the "Save as" button next to the preset name.
                    5. Close the settings window and start playing Embertone.jubal.flute with your MIDI keyboard or controller.
                    -

                    How to play and control Embertone.jubal.flute

                    -

                    To play and control Embertone.jubal.flute, you need to understand how it responds to different MIDI inputs and commands. Here are some tips and tricks for playing Embertone.jubal.flute (a small scripted MIDI example follows the list):

                    -
                    • The main MIDI input for Embertone.jubal.flute is the note velocity, which determines the volume and timbre of the flute sound. The harder you press a key, the louder and brighter the sound will be; the softer you press it, the quieter and darker the sound will be.
                    • You can also use the modulation wheel (CC1) to control the vibrato intensity of the flute sound. The higher you move the wheel, the more vibrato you will hear; the lower you move it, the less vibrato you will hear.
                    • You can use keyswitches to change the articulation of the flute sound. Keyswitches are special notes that trigger different playing modes or effects. For example, C0 switches to legato mode, which creates smooth transitions between notes; D0 switches to staccato mode, which creates short, detached notes; and E0 switches to trill mode, which creates rapid alternations between two notes. You can see all the available keyswitches in the interface or in the user manual.
                    • You can use aftertouch (channel pressure) to control the breath noise of the flute sound. The more pressure you apply to a key after striking it, the more breath noise you will hear; the less pressure you apply, the less breath noise you will hear.
                    • You can use the pitch-bend wheel to bend the pitch of the flute sound up or down by a semitone. This can create expressive effects such as glissandos or portamentos.
                    -
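                    If you prefer to drive these controls from a script (for example, to audition your settings), the sketch below uses the third-party Python library mido. It is not part of Embertone.jubal.flute: the MIDI port name is a placeholder for whatever virtual port you route into Kontakt, and the C0 keyswitch note number simply assumes the common middle-C-equals-C3 convention rather than anything verified in the manual.

```python
# Minimal sketch: sending the flute's main MIDI controls from Python with mido.
# Assumes `pip install mido python-rtmidi` and a virtual MIDI port routed into
# Kontakt; "loopMIDI Port" is a placeholder name, not something mido creates.
import time
import mido

with mido.open_output("loopMIDI Port") as port:
    # Tap the C0 keyswitch (note 24 if middle C is C3) to select legato mode.
    port.send(mido.Message("note_on", note=24, velocity=1))
    port.send(mido.Message("note_off", note=24))

    port.send(mido.Message("control_change", control=1, value=90))  # mod wheel: heavier vibrato
    port.send(mido.Message("note_on", note=72, velocity=110))       # loud, bright note
    time.sleep(0.5)
    port.send(mido.Message("aftertouch", value=100))                 # channel pressure: more breath noise
    port.send(mido.Message("pitchwheel", pitch=2048))                # gentle upward bend
    time.sleep(0.5)
    port.send(mido.Message("note_off", note=72))
```

                    -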

                    How to use Embertone.jubal.flute in different musical genres and contexts

                    -

                    Embertone.jubal.flute is a versatile instrument that can be used in different musical genres and contexts. Here are some examples of how you can use Embertone.jubal.flute in your music production:

                    -
                    • If you want to create a folk or ethnic vibe, you can use Embertone.jubal.flute as a solo or lead instrument, playing melodies or improvisations over a simple chord progression or rhythm section. You can also layer it with other instruments such as guitars, mandolins, violins, or percussion.
                    • If you want to create a fantasy or cinematic atmosphere, you can use Embertone.jubal.flute as a background or ambient instrument, playing long notes or pads with reverb and delay effects. You can also combine it with other orchestral instruments such as strings, brass, woodwinds, or harps.
                    • If you want to create a pop or rock sound, you can use Embertone.jubal.flute as an accent or fill instrument, playing short phrases or hooks with distortion and chorus effects. You can also blend it with other electric instruments such as guitars, keyboards, or drums.
                    • If you want to create a jazz or fusion sound, you can use Embertone.jubal.flute as a solo or lead instrument, playing complex melodies or solos over a sophisticated chord progression or groove. You can also mix it with other acoustic instruments such as pianos, basses, or saxophones.
                    -

                    How to customize and tweak Embertone.jubal.flute to suit your preferences and needs

                    -

                    Embertone.jubal.flute offers some customization and tweaking options that allow you to shape the sound and performance of the instrument to suit your preferences and needs. Here are some examples of how you can customize and tweak Embertone.jubal.flute:

                    -
                    • If you want to change the reverb of the flute sound, you can use the reverb knob in the interface to adjust the amount of reverb. You can also click on the reverb button to open a pop-up window where you can choose from different reverb types and settings.
                    • If you want to change the vibrato of the flute sound, you can use the mod wheel (CC1) to control the vibrato intensity. You can also click on the vibrato button to open a pop-up window where you can adjust the vibrato speed, depth, and delay.
                    • If you want to change the velocity curve of the flute sound, you can use the velocity curve knob in the interface to adjust how the flute sound responds to different note velocities. You can also click on the velocity curve button to open a pop-up window where you can choose from different velocity curve presets or draw your own curve.
                    • If you want to change the keyswitches of the flute sound, you can use the keyswitches knob in the interface to adjust the octave range of the keyswitches. You can also click on the keyswitches button to open a pop-up window where you can see all the available keyswitches and their functions.
                    -

                    How to record and mix Embertone.jubal.flute in your DAW

                    -

                    To record and mix Embertone.jubal.flute in your DAW, you need to follow these steps:

                    -
                    1. Create a new MIDI track in your DAW and load Kontakt as a plugin.
                    2. Load Embertone.jubal.flute from your Kontakt library and select the instrument patch that you want to use.
                    3. Adjust the settings of Embertone.jubal.flute according to your preferences and needs.
                    4. Record your MIDI performance using your MIDI keyboard or controller.
                    5. Edit your MIDI performance if needed, using quantization, transposition, velocity editing, or other tools.
                    6. Apply some effects to your MIDI track, such as EQ, compression, reverb, delay, or other plugins.
                    7. Mix your MIDI track with other tracks in your project, using volume, pan, automation, or other tools.
                    8. Export your project as an audio file or render it as a video file.
                    -

                    Conclusion

                    -

                    Embertone.jubal.flute is a realistic, expressive, and versatile flute instrument for Kontakt that features a unique flute made from hemlock by a mysterious craftsman named Jubal. It offers high-quality samples, true legato scripting, round robin neighbor borrowing, dynamic control, and customization options. It is suitable for various musical genres and contexts, from folk to fantasy, from solo to ensemble.

                    -

                    If you want to add some flute magic to your music production, you should definitely try Embertone.jubal.flute. It is easy to use, affordable, and fun. You can get it from the official website of Embertone (https://www.embertone.com/instruments/jubalflute.php) for $15 USD. You can also check out some demos and videos of Embertone.jubal.flute on their website or YouTube channel (https://www.youtube.com/user/Embertone).

                    -

                    We hope this article has been helpful and informative for you. If you have any questions or feedback about Embertone.jubal.flute, feel free to leave a comment below or contact Embertone directly (https://www.embertone.com/contact.php). Happy fluting!

                    -

                    FAQs

                    -

                    What is the difference between Embertone.jubal.flute and other flute libraries or plugins?

                    -

                    The main difference between Embertone.jubal.flute and other flute libraries or plugins is the sound and feel of the instrument itself. Embertone.jubal.flute features a unique flute made from hemlock wood, which gives it an earthy and emotional tone, and it has a distinctive shape and sound that make it stand out from other flutes. Embertone.jubal.flute also has true legato scripting, which creates realistic transitions between notes. Other flute libraries or plugins may have different sounds, features, and qualities, depending on their design and sampling methods.

                    -

                    What are the system requirements for Embertone.jubal.flute?

                    -

                    The system requirements for Embertone.jubal.flute are as follows:

                    -
                    • Kontakt 4.2.4 or higher (not Kontakt Player)
                    • Mac or Windows platform
                    • 250 MB of disk space
                    • 1 GB of RAM
                    • MIDI keyboard or controller
                    -

                    How much does Embertone.jubal.flute cost and where can I buy it?

                    -

                    The price of Embertone.jubal.flute is $15 USD, which is very affordable for such a quality instrument. You can buy it from the official website of Embertone (https://www.embertone.com/instruments/jubalflute.php) using PayPal or credit card.

                    -

                    Can I use Embertone.jubal.flute on iOS devices?

                    -

                    No, Embertone.jubal.flute is not compatible with iOS devices. It is designed for Kontakt 4.2.4 or higher (not Kontakt Player) and works on both Mac and Windows platforms.

                    -

                    How can I contact Embertone for support or suggestions?

                    -

                    You can contact Embertone for support or suggestions by filling out the contact form on their website (https://www.embertone.com/contact.php) or by sending an email to info@embertone.com. You can also follow them on Facebook (https://www.facebook.com/Embertone/) or Twitter (https://twitter.com/Embertone) for updates and news.

                    b2dd77e56b
                    -
                    -
                    \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/commands/search.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/commands/search.py deleted file mode 100644 index 03ed925b246dd551ec2ef45095ed6cad00fd2745..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/commands/search.py +++ /dev/null @@ -1,174 +0,0 @@ -import logging -import shutil -import sys -import textwrap -import xmlrpc.client -from collections import OrderedDict -from optparse import Values -from typing import TYPE_CHECKING, Dict, List, Optional - -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.cli.base_command import Command -from pip._internal.cli.req_command import SessionCommandMixin -from pip._internal.cli.status_codes import NO_MATCHES_FOUND, SUCCESS -from pip._internal.exceptions import CommandError -from pip._internal.metadata import get_default_environment -from pip._internal.models.index import PyPI -from pip._internal.network.xmlrpc import PipXmlrpcTransport -from pip._internal.utils.logging import indent_log -from pip._internal.utils.misc import write_output - -if TYPE_CHECKING: - from typing import TypedDict - - class TransformedHit(TypedDict): - name: str - summary: str - versions: List[str] - - -logger = logging.getLogger(__name__) - - -class SearchCommand(Command, SessionCommandMixin): - """Search for PyPI packages whose name or summary contains .""" - - usage = """ - %prog [options] """ - ignore_require_venv = True - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-i", - "--index", - dest="index", - metavar="URL", - default=PyPI.pypi_url, - help="Base URL of Python Package Index (default %default)", - ) - - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - if not args: - raise CommandError("Missing required argument (search query).") - query = args - pypi_hits = self.search(query, options) - hits = transform_hits(pypi_hits) - - terminal_width = None - if sys.stdout.isatty(): - terminal_width = shutil.get_terminal_size()[0] - - print_results(hits, terminal_width=terminal_width) - if pypi_hits: - return SUCCESS - return NO_MATCHES_FOUND - - def search(self, query: List[str], options: Values) -> List[Dict[str, str]]: - index_url = options.index - - session = self.get_default_session(options) - - transport = PipXmlrpcTransport(index_url, session) - pypi = xmlrpc.client.ServerProxy(index_url, transport) - try: - hits = pypi.search({"name": query, "summary": query}, "or") - except xmlrpc.client.Fault as fault: - message = "XMLRPC request failed [code: {code}]\n{string}".format( - code=fault.faultCode, - string=fault.faultString, - ) - raise CommandError(message) - assert isinstance(hits, list) - return hits - - -def transform_hits(hits: List[Dict[str, str]]) -> List["TransformedHit"]: - """ - The list from pypi is really a list of versions. We want a list of - packages with the list of versions stored inline. This converts the - list from pypi into one we can use. 
- """ - packages: Dict[str, "TransformedHit"] = OrderedDict() - for hit in hits: - name = hit["name"] - summary = hit["summary"] - version = hit["version"] - - if name not in packages.keys(): - packages[name] = { - "name": name, - "summary": summary, - "versions": [version], - } - else: - packages[name]["versions"].append(version) - - # if this is the highest version, replace summary and score - if version == highest_version(packages[name]["versions"]): - packages[name]["summary"] = summary - - return list(packages.values()) - - -def print_dist_installation_info(name: str, latest: str) -> None: - env = get_default_environment() - dist = env.get_distribution(name) - if dist is not None: - with indent_log(): - if dist.version == latest: - write_output("INSTALLED: %s (latest)", dist.version) - else: - write_output("INSTALLED: %s", dist.version) - if parse_version(latest).pre: - write_output( - "LATEST: %s (pre-release; install" - " with `pip install --pre`)", - latest, - ) - else: - write_output("LATEST: %s", latest) - - -def print_results( - hits: List["TransformedHit"], - name_column_width: Optional[int] = None, - terminal_width: Optional[int] = None, -) -> None: - if not hits: - return - if name_column_width is None: - name_column_width = ( - max( - [ - len(hit["name"]) + len(highest_version(hit.get("versions", ["-"]))) - for hit in hits - ] - ) - + 4 - ) - - for hit in hits: - name = hit["name"] - summary = hit["summary"] or "" - latest = highest_version(hit.get("versions", ["-"])) - if terminal_width is not None: - target_width = terminal_width - name_column_width - 5 - if target_width > 10: - # wrap and indent summary to fit terminal - summary_lines = textwrap.wrap(summary, target_width) - summary = ("\n" + " " * (name_column_width + 3)).join(summary_lines) - - name_latest = f"{name} ({latest})" - line = f"{name_latest:{name_column_width}} - {summary}" - try: - write_output(line) - print_dist_installation_info(name, latest) - except UnicodeEncodeError: - pass - - -def highest_version(versions: List[str]) -> str: - return max(versions, key=parse_version) diff --git a/spaces/tomofi/MMOCR/mmocr/models/textrecog/decoders/sar_decoder.py b/spaces/tomofi/MMOCR/mmocr/models/textrecog/decoders/sar_decoder.py deleted file mode 100644 index ee79e8c05f7246d3fe2172493ea883ceb9848f0f..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/textrecog/decoders/sar_decoder.py +++ /dev/null @@ -1,478 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -import mmocr.utils as utils -from mmocr.models.builder import DECODERS -from .base_decoder import BaseDecoder - - -@DECODERS.register_module() -class ParallelSARDecoder(BaseDecoder): - """Implementation Parallel Decoder module in `SAR. - - `_. - - Args: - num_classes (int): Output class number :math:`C`. - channels (list[int]): Network layer channels. - enc_bi_rnn (bool): If True, use bidirectional RNN in encoder. - dec_bi_rnn (bool): If True, use bidirectional RNN in decoder. - dec_do_rnn (float): Dropout of RNN layer in decoder. - dec_gru (bool): If True, use GRU, else LSTM in decoder. - d_model (int): Dim of channels from backbone :math:`D_i`. - d_enc (int): Dim of encoder RNN layer :math:`D_m`. - d_k (int): Dim of channels of attention module. - pred_dropout (float): Dropout probability of prediction layer. - max_seq_len (int): Maximum sequence length for decoding. - mask (bool): If True, mask padding in feature map. 
- start_idx (int): Index of start token. - padding_idx (int): Index of padding token. - pred_concat (bool): If True, concat glimpse feature from - attention with holistic feature and hidden state. - init_cfg (dict or list[dict], optional): Initialization configs. - - Warning: - This decoder will not predict the final class which is assumed to be - ``. Therefore, its output size is always :math:`C - 1`. `` - is also ignored by loss as specified in - :obj:`mmocr.models.textrecog.recognizer.EncodeDecodeRecognizer`. - """ - - def __init__(self, - num_classes=37, - enc_bi_rnn=False, - dec_bi_rnn=False, - dec_do_rnn=0.0, - dec_gru=False, - d_model=512, - d_enc=512, - d_k=64, - pred_dropout=0.0, - max_seq_len=40, - mask=True, - start_idx=0, - padding_idx=92, - pred_concat=False, - init_cfg=None, - **kwargs): - super().__init__(init_cfg=init_cfg) - - self.num_classes = num_classes - self.enc_bi_rnn = enc_bi_rnn - self.d_k = d_k - self.start_idx = start_idx - self.max_seq_len = max_seq_len - self.mask = mask - self.pred_concat = pred_concat - - encoder_rnn_out_size = d_enc * (int(enc_bi_rnn) + 1) - decoder_rnn_out_size = encoder_rnn_out_size * (int(dec_bi_rnn) + 1) - # 2D attention layer - self.conv1x1_1 = nn.Linear(decoder_rnn_out_size, d_k) - self.conv3x3_1 = nn.Conv2d( - d_model, d_k, kernel_size=3, stride=1, padding=1) - self.conv1x1_2 = nn.Linear(d_k, 1) - - # Decoder RNN layer - kwargs = dict( - input_size=encoder_rnn_out_size, - hidden_size=encoder_rnn_out_size, - num_layers=2, - batch_first=True, - dropout=dec_do_rnn, - bidirectional=dec_bi_rnn) - if dec_gru: - self.rnn_decoder = nn.GRU(**kwargs) - else: - self.rnn_decoder = nn.LSTM(**kwargs) - - # Decoder input embedding - self.embedding = nn.Embedding( - self.num_classes, encoder_rnn_out_size, padding_idx=padding_idx) - - # Prediction layer - self.pred_dropout = nn.Dropout(pred_dropout) - pred_num_classes = num_classes - 1 # ignore padding_idx in prediction - if pred_concat: - fc_in_channel = decoder_rnn_out_size + d_model + \ - encoder_rnn_out_size - else: - fc_in_channel = d_model - self.prediction = nn.Linear(fc_in_channel, pred_num_classes) - - def _2d_attention(self, - decoder_input, - feat, - holistic_feat, - valid_ratios=None): - y = self.rnn_decoder(decoder_input)[0] - # y: bsz * (seq_len + 1) * hidden_size - - attn_query = self.conv1x1_1(y) # bsz * (seq_len + 1) * attn_size - bsz, seq_len, attn_size = attn_query.size() - attn_query = attn_query.view(bsz, seq_len, attn_size, 1, 1) - - attn_key = self.conv3x3_1(feat) - # bsz * attn_size * h * w - attn_key = attn_key.unsqueeze(1) - # bsz * 1 * attn_size * h * w - - attn_weight = torch.tanh(torch.add(attn_key, attn_query, alpha=1)) - # bsz * (seq_len + 1) * attn_size * h * w - attn_weight = attn_weight.permute(0, 1, 3, 4, 2).contiguous() - # bsz * (seq_len + 1) * h * w * attn_size - attn_weight = self.conv1x1_2(attn_weight) - # bsz * (seq_len + 1) * h * w * 1 - bsz, T, h, w, c = attn_weight.size() - assert c == 1 - - if valid_ratios is not None: - # cal mask of attention weight - attn_mask = torch.zeros_like(attn_weight) - for i, valid_ratio in enumerate(valid_ratios): - valid_width = min(w, math.ceil(w * valid_ratio)) - attn_mask[i, :, :, valid_width:, :] = 1 - attn_weight = attn_weight.masked_fill(attn_mask.bool(), - float('-inf')) - - attn_weight = attn_weight.view(bsz, T, -1) - attn_weight = F.softmax(attn_weight, dim=-1) - attn_weight = attn_weight.view(bsz, T, h, w, - c).permute(0, 1, 4, 2, 3).contiguous() - - attn_feat = torch.sum( - torch.mul(feat.unsqueeze(1), attn_weight), 
(3, 4), keepdim=False) - # bsz * (seq_len + 1) * C - - # linear transformation - if self.pred_concat: - hf_c = holistic_feat.size(-1) - holistic_feat = holistic_feat.expand(bsz, seq_len, hf_c) - y = self.prediction(torch.cat((y, attn_feat, holistic_feat), 2)) - else: - y = self.prediction(attn_feat) - # bsz * (seq_len + 1) * num_classes - if self.train_mode: - y = self.pred_dropout(y) - - return y - - def forward_train(self, feat, out_enc, targets_dict, img_metas): - """ - Args: - feat (Tensor): Tensor of shape :math:`(N, D_i, H, W)`. - out_enc (Tensor): Encoder output of shape - :math:`(N, D_m, H, W)`. - targets_dict (dict): A dict with the key ``padded_targets``, a - tensor of shape :math:`(N, T)`. Each element is the index of a - character. - img_metas (dict): A dict that contains meta information of input - images. Preferably with the key ``valid_ratio``. - - Returns: - Tensor: A raw logit tensor of shape :math:`(N, T, C-1)`. - """ - if img_metas is not None: - assert utils.is_type_list(img_metas, dict) - assert len(img_metas) == feat.size(0) - - valid_ratios = None - if img_metas is not None: - valid_ratios = [ - img_meta.get('valid_ratio', 1.0) for img_meta in img_metas - ] if self.mask else None - - targets = targets_dict['padded_targets'].to(feat.device) - tgt_embedding = self.embedding(targets) - # bsz * seq_len * emb_dim - out_enc = out_enc.unsqueeze(1) - # bsz * 1 * emb_dim - in_dec = torch.cat((out_enc, tgt_embedding), dim=1) - # bsz * (seq_len + 1) * C - out_dec = self._2d_attention( - in_dec, feat, out_enc, valid_ratios=valid_ratios) - # bsz * (seq_len + 1) * num_classes - - return out_dec[:, 1:, :] # bsz * seq_len * num_classes - - def forward_test(self, feat, out_enc, img_metas): - """ - Args: - feat (Tensor): Tensor of shape :math:`(N, D_i, H, W)`. - out_enc (Tensor): Encoder output of shape - :math:`(N, D_m, H, W)`. - img_metas (dict): A dict that contains meta information of input - images. Preferably with the key ``valid_ratio``. - - Returns: - Tensor: A raw logit tensor of shape :math:`(N, T, C-1)`. - """ - if img_metas is not None: - assert utils.is_type_list(img_metas, dict) - assert len(img_metas) == feat.size(0) - - valid_ratios = None - if img_metas is not None: - valid_ratios = [ - img_meta.get('valid_ratio', 1.0) for img_meta in img_metas - ] if self.mask else None - - seq_len = self.max_seq_len - - bsz = feat.size(0) - start_token = torch.full((bsz, ), - self.start_idx, - device=feat.device, - dtype=torch.long) - # bsz - start_token = self.embedding(start_token) - # bsz * emb_dim - start_token = start_token.unsqueeze(1).expand(-1, seq_len, -1) - # bsz * seq_len * emb_dim - out_enc = out_enc.unsqueeze(1) - # bsz * 1 * emb_dim - decoder_input = torch.cat((out_enc, start_token), dim=1) - # bsz * (seq_len + 1) * emb_dim - - outputs = [] - for i in range(1, seq_len + 1): - decoder_output = self._2d_attention( - decoder_input, feat, out_enc, valid_ratios=valid_ratios) - char_output = decoder_output[:, i, :] # bsz * num_classes - char_output = F.softmax(char_output, -1) - outputs.append(char_output) - _, max_idx = torch.max(char_output, dim=1, keepdim=False) - char_embedding = self.embedding(max_idx) # bsz * emb_dim - if i < seq_len: - decoder_input[:, i + 1, :] = char_embedding - - outputs = torch.stack(outputs, 1) # bsz * seq_len * num_classes - - return outputs - - -@DECODERS.register_module() -class SequentialSARDecoder(BaseDecoder): - """Implementation Sequential Decoder module in `SAR. - - `_. - - Args: - num_classes (int): Output class number :math:`C`. 
- enc_bi_rnn (bool): If True, use bidirectional RNN in encoder. - dec_bi_rnn (bool): If True, use bidirectional RNN in decoder. - dec_do_rnn (float): Dropout of RNN layer in decoder. - dec_gru (bool): If True, use GRU, else LSTM in decoder. - d_k (int): Dim of conv layers in attention module. - d_model (int): Dim of channels from backbone :math:`D_i`. - d_enc (int): Dim of encoder RNN layer :math:`D_m`. - pred_dropout (float): Dropout probability of prediction layer. - max_seq_len (int): Maximum sequence length during decoding. - mask (bool): If True, mask padding in feature map. - start_idx (int): Index of start token. - padding_idx (int): Index of padding token. - pred_concat (bool): If True, concat glimpse feature from - attention with holistic feature and hidden state. - """ - - def __init__(self, - num_classes=37, - enc_bi_rnn=False, - dec_bi_rnn=False, - dec_gru=False, - d_k=64, - d_model=512, - d_enc=512, - pred_dropout=0.0, - mask=True, - max_seq_len=40, - start_idx=0, - padding_idx=92, - pred_concat=False, - init_cfg=None, - **kwargs): - super().__init__(init_cfg=init_cfg) - - self.num_classes = num_classes - self.enc_bi_rnn = enc_bi_rnn - self.d_k = d_k - self.start_idx = start_idx - self.dec_gru = dec_gru - self.max_seq_len = max_seq_len - self.mask = mask - self.pred_concat = pred_concat - - encoder_rnn_out_size = d_enc * (int(enc_bi_rnn) + 1) - decoder_rnn_out_size = encoder_rnn_out_size * (int(dec_bi_rnn) + 1) - # 2D attention layer - self.conv1x1_1 = nn.Conv2d( - decoder_rnn_out_size, d_k, kernel_size=1, stride=1) - self.conv3x3_1 = nn.Conv2d( - d_model, d_k, kernel_size=3, stride=1, padding=1) - self.conv1x1_2 = nn.Conv2d(d_k, 1, kernel_size=1, stride=1) - - # Decoder rnn layer - if dec_gru: - self.rnn_decoder_layer1 = nn.GRUCell(encoder_rnn_out_size, - encoder_rnn_out_size) - self.rnn_decoder_layer2 = nn.GRUCell(encoder_rnn_out_size, - encoder_rnn_out_size) - else: - self.rnn_decoder_layer1 = nn.LSTMCell(encoder_rnn_out_size, - encoder_rnn_out_size) - self.rnn_decoder_layer2 = nn.LSTMCell(encoder_rnn_out_size, - encoder_rnn_out_size) - - # Decoder input embedding - self.embedding = nn.Embedding( - self.num_classes, encoder_rnn_out_size, padding_idx=padding_idx) - - # Prediction layer - self.pred_dropout = nn.Dropout(pred_dropout) - pred_num_class = num_classes - 1 # ignore padding index - if pred_concat: - fc_in_channel = decoder_rnn_out_size + d_model + d_enc - else: - fc_in_channel = d_model - self.prediction = nn.Linear(fc_in_channel, pred_num_class) - - def _2d_attention(self, - y_prev, - feat, - holistic_feat, - hx1, - cx1, - hx2, - cx2, - valid_ratios=None): - _, _, h_feat, w_feat = feat.size() - if self.dec_gru: - hx1 = cx1 = self.rnn_decoder_layer1(y_prev, hx1) - hx2 = cx2 = self.rnn_decoder_layer2(hx1, hx2) - else: - hx1, cx1 = self.rnn_decoder_layer1(y_prev, (hx1, cx1)) - hx2, cx2 = self.rnn_decoder_layer2(hx1, (hx2, cx2)) - - tile_hx2 = hx2.view(hx2.size(0), hx2.size(1), 1, 1) - attn_query = self.conv1x1_1(tile_hx2) # bsz * attn_size * 1 * 1 - attn_query = attn_query.expand(-1, -1, h_feat, w_feat) - attn_key = self.conv3x3_1(feat) - attn_weight = torch.tanh(torch.add(attn_key, attn_query, alpha=1)) - attn_weight = self.conv1x1_2(attn_weight) - bsz, c, h, w = attn_weight.size() - assert c == 1 - - if valid_ratios is not None: - # cal mask of attention weight - attn_mask = torch.zeros_like(attn_weight) - for i, valid_ratio in enumerate(valid_ratios): - valid_width = min(w, math.ceil(w * valid_ratio)) - attn_mask[i, :, :, valid_width:] = 1 - attn_weight = 
attn_weight.masked_fill(attn_mask.bool(), - float('-inf')) - - attn_weight = F.softmax(attn_weight.view(bsz, -1), dim=-1) - attn_weight = attn_weight.view(bsz, c, h, w) - - attn_feat = torch.sum( - torch.mul(feat, attn_weight), (2, 3), keepdim=False) # n * c - - # linear transformation - if self.pred_concat: - y = self.prediction(torch.cat((hx2, attn_feat, holistic_feat), 1)) - else: - y = self.prediction(attn_feat) - - return y, hx1, hx1, hx2, hx2 - - def forward_train(self, feat, out_enc, targets_dict, img_metas=None): - """ - Args: - feat (Tensor): Tensor of shape :math:`(N, D_i, H, W)`. - out_enc (Tensor): Encoder output of shape - :math:`(N, D_m, H, W)`. - targets_dict (dict): A dict with the key ``padded_targets``, a - tensor of shape :math:`(N, T)`. Each element is the index of a - character. - img_metas (dict): A dict that contains meta information of input - images. Preferably with the key ``valid_ratio``. - - Returns: - Tensor: A raw logit tensor of shape :math:`(N, T, C-1)`. - """ - if img_metas is not None: - assert utils.is_type_list(img_metas, dict) - assert len(img_metas) == feat.size(0) - - valid_ratios = None - if img_metas is not None: - valid_ratios = [ - img_meta.get('valid_ratio', 1.0) for img_meta in img_metas - ] if self.mask else None - - if self.train_mode: - targets = targets_dict['padded_targets'].to(feat.device) - tgt_embedding = self.embedding(targets) - - outputs = [] - start_token = torch.full((feat.size(0), ), - self.start_idx, - device=feat.device, - dtype=torch.long) - start_token = self.embedding(start_token) - for i in range(-1, self.max_seq_len): - if i == -1: - if self.dec_gru: - hx1 = cx1 = self.rnn_decoder_layer1(out_enc) - hx2 = cx2 = self.rnn_decoder_layer2(hx1) - else: - hx1, cx1 = self.rnn_decoder_layer1(out_enc) - hx2, cx2 = self.rnn_decoder_layer2(hx1) - if not self.train_mode: - y_prev = start_token - else: - if self.train_mode: - y_prev = tgt_embedding[:, i, :] - y, hx1, cx1, hx2, cx2 = self._2d_attention( - y_prev, - feat, - out_enc, - hx1, - cx1, - hx2, - cx2, - valid_ratios=valid_ratios) - if self.train_mode: - y = self.pred_dropout(y) - else: - y = F.softmax(y, -1) - _, max_idx = torch.max(y, dim=1, keepdim=False) - char_embedding = self.embedding(max_idx) - y_prev = char_embedding - outputs.append(y) - - outputs = torch.stack(outputs, 1) - - return outputs - - def forward_test(self, feat, out_enc, img_metas): - """ - Args: - feat (Tensor): Tensor of shape :math:`(N, D_i, H, W)`. - out_enc (Tensor): Encoder output of shape - :math:`(N, D_m, H, W)`. - img_metas (dict): A dict that contains meta information of input - images. Preferably with the key ``valid_ratio``. - - Returns: - Tensor: A raw logit tensor of shape :math:`(N, T, C-1)`. 
- """ - if img_metas is not None: - assert utils.is_type_list(img_metas, dict) - assert len(img_metas) == feat.size(0) - - return self.forward_train(feat, out_enc, None, img_metas) diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/layers/dcn/__init__.py b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/layers/dcn/__init__.py deleted file mode 100644 index bb5af25d45fd8b80a347566ecef1f9cf77d3da48..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/layers/dcn/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# -# Copied From [mmdetection](https://github.com/open-mmlab/mmdetection/tree/master/mmdet/ops/dcn) -# diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py deleted file mode 100644 index f897e7c55c8b8f0ef7a5db92f29ef1c2415965db..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './faster_rcnn_r50_fpn_1x_coco.py' -model = dict(train_cfg=dict(rcnn=dict(sampler=dict(type='OHEMSampler')))) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/mytools/create_img_list.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/mytools/create_img_list.py deleted file mode 100644 index f497f1e12822bd36376258b306f159108731104a..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tools/mytools/create_img_list.py +++ /dev/null @@ -1,32 +0,0 @@ -# create image list from test/train.json - -import json -import os -import argparse - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument('json', help='input json file path') - parser.add_argument('-o', '--out', default='img_list.list', help='output list file path') - return parser.parse_args() - -def main(): - args = parse_args() - json_path = args.json - json_open = open(json_path, 'r') - json_load = json.load(json_open) - - dir_path =os.path.dirname(json_path) - img_list_name = args.out - - for im in json_load['images']: - img_path = os.path.join(dir_path, im['file_name']) - print(img_path) - - with open(img_list_name, mode='a') as f: - f.writelines(img_path+'\n') - - print('Exprot: {}'.format(img_list_name)) - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/ucalyptus/PTI/utils/__init__.py b/spaces/ucalyptus/PTI/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/uchuukaizoku/CharcaterClassifier1/app.py b/spaces/uchuukaizoku/CharcaterClassifier1/app.py deleted file mode 100644 index a16453fbc81726a6f0958bd628532f8c14f4a490..0000000000000000000000000000000000000000 --- a/spaces/uchuukaizoku/CharcaterClassifier1/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import gradio as gr -import torch -import clip -from PIL import Image, ImageEnhance - -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load("ViT-B/32", device=device) - - -def predict(image): - # labels = "Early American Art,19th and 20th–Century Art,Contemporary Art,Modern Folk,African American Art,Latino Art,Mesoamerican,Egyptian,British Art,Celtic Art,German Art,Medieval European,Gothic,Native American,African Art,Asia pacific 
Art,Oceanía,Classical,Byzantine,Medieval,Gothic,Renaissance,Baroque,Rococo,Neoclassical,Modernism,Postmodern ,Irish,German,French,Italian,Spanish,Portuguese,Greek,Chinese,Japanese,Korean,Thai,Australian,Middle Eastern,Mesopotamian,Prehistoric,Mexican,Popart,Scottish,Netherlands" - labels = "Japanese, Chinese, Roman, Greek, Etruscan, Scandinavian, Celtic, Medieval, Victorian, Neoclassic, Romanticism, Art Nouveau, Art deco" - labels = labels.split(',') - - converter = ImageEnhance.Color(image) - image = converter.enhance(0.5) - image = image.convert("L") - image = preprocess(image).unsqueeze(0).to(device) - text = clip.tokenize([f"a character of origin: {c}" for c in labels]).to(device) - - with torch.inference_mode(): - logits_per_image, logits_per_text = model(image, text) - probs = logits_per_image.softmax(dim=-1).cpu().numpy() - - return {k: float(v) for k, v in zip(labels, probs[0])} - -# probs = predict(Image.open("../CLIP/CLIP.png"), "cat, dog, ball") -# print(probs) - - -gr.Interface(fn=predict, - inputs=[ - gr.inputs.Image(label="Image to classify.", type="pil")], - theme="gradio/monochrome", - outputs="label", - description="Character Image classification").launch() \ No newline at end of file diff --git a/spaces/unstructuredio/chat-your-data-isw/README.md b/spaces/unstructuredio/chat-your-data-isw/README.md deleted file mode 100644 index a00279ddb030615064bbf18c53ae0ecc209ea718..0000000000000000000000000000000000000000 --- a/spaces/unstructuredio/chat-your-data-isw/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chat Your Data ISW -emoji: 🇺🇦 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/3310 Nhm 5 V6.39 37 [PORTABLE].md b/spaces/usbethFlerru/sovits-modelsV2/example/3310 Nhm 5 V6.39 37 [PORTABLE].md deleted file mode 100644 index f140494b440a37d8f2d635c92368d53941b38833..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/3310 Nhm 5 V6.39 37 [PORTABLE].md +++ /dev/null @@ -1,66 +0,0 @@ -

                    3310 Nhm 5 V6.39 37


                    Download Filehttps://urlcod.com/2uyWMY



                    - -Industrial Revolution. - -What's in the box? - -3 - -Worksheets - -Quick Reference - -Guided Reading - -Content: 6 - -Skill: 15 - -Presentation: 9 - -Performance: 8 - -Materials: 5 - -Complexity: 3 - -Process: 10 - -This product is part of the following collection of products from PowerSchool - -Price: $29.99USD - -PowerSchool offers these products in a bundled set for one price. To purchase all products in the set, - -please add all products to your cart, and proceed to checkout. For more information about bundled sets, - -including pricing, and information about other PowerSchool products, please visit our PowerSchool - -Store. - -eBook and Print Bundle - -$19.99USD - -Price: $19.99USD - -Store.Q: - -can I use the decoder reference audio in opensl3? - -I have a decoder reference audio in flac format. I would like to extract a sample of it and use it as my source. Is this possible? If so, how? - -I know that the encoder sends data in reference to the decoder. This implies that the decoder can decode a specific part of the audio. Could I use this as a source? - -A: - -Ok, it was obvious that you use the encoder output reference instead of the decoder output. This means that the decoder will decode to the audio that the encoder sent. - -I'm not sure if it's possible to use the decoder reference output as a source in opensl3, but you can use any audio stream. - -Telomeric repeat sequence-specific binding protein is required for end-protection and chromatin relaxation. - -Telomeres are nucleoprotein structures that protect chromosome ends from degradation and maintain the ability of cells to proliferate for long periods. Although the maintenance of telomere structure is a complex and dynamic process, certain important players involved in this process have been identified. One of these is the telomeric repeat 4fefd39f24
                    -
                    -
                    -

                    diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/6tindr Unofficial Tinder App For WP8 Released.md b/spaces/usbethFlerru/sovits-modelsV2/example/6tindr Unofficial Tinder App For WP8 Released.md deleted file mode 100644 index c5fd0fc9f25d1ce4e6e218da50f194499da66e24..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/6tindr Unofficial Tinder App For WP8 Released.md +++ /dev/null @@ -1,6 +0,0 @@ -

                    6tindr, Unofficial Tinder app for WP8 Released


                    DOWNLOAD ---> https://urlcod.com/2uyVq1



                    -
                    - aaccfb2cb3
                    -
                    -
                    -

                    diff --git a/spaces/user238921933/stable-diffusion-webui/launch.py b/spaces/user238921933/stable-diffusion-webui/launch.py deleted file mode 100644 index c83dd5b72591d422cfc5f64cbe15f19021d8b159..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/launch.py +++ /dev/null @@ -1,361 +0,0 @@ -# this scripts installs necessary requirements and launches main program in webui.py -import subprocess -import os -import sys -import importlib.util -import shlex -import platform -import argparse -import json - -dir_repos = "repositories" -dir_extensions = "extensions" -python = sys.executable -git = os.environ.get('GIT', "git") -index_url = os.environ.get('INDEX_URL', "") -stored_commit_hash = None -skip_install = False - - -def check_python_version(): - is_windows = platform.system() == "Windows" - major = sys.version_info.major - minor = sys.version_info.minor - micro = sys.version_info.micro - - if is_windows: - supported_minors = [10] - else: - supported_minors = [7, 8, 9, 10, 11] - - if not (major == 3 and minor in supported_minors): - import modules.errors - - modules.errors.print_error_explanation(f""" -INCOMPATIBLE PYTHON VERSION - -This program is tested with 3.10.6 Python, but you have {major}.{minor}.{micro}. -If you encounter an error with "RuntimeError: Couldn't install torch." message, -or any other error regarding unsuccessful package (library) installation, -please downgrade (or upgrade) to the latest version of 3.10 Python -and delete current Python and "venv" folder in WebUI's directory. - -You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3109/ - -{"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases" if is_windows else ""} - -Use --skip-python-version-check to suppress this warning. -""") - - -def commit_hash(): - global stored_commit_hash - - if stored_commit_hash is not None: - return stored_commit_hash - - try: - stored_commit_hash = run(f"{git} rev-parse HEAD").strip() - except Exception: - stored_commit_hash = "" - - return stored_commit_hash - - -def extract_arg(args, name): - return [x for x in args if x != name], name in args - - -def extract_opt(args, name): - opt = None - is_present = False - if name in args: - is_present = True - idx = args.index(name) - del args[idx] - if idx < len(args) and args[idx][0] != "-": - opt = args[idx] - del args[idx] - return args, is_present, opt - - -def run(command, desc=None, errdesc=None, custom_env=None, live=False): - if desc is not None: - print(desc) - - if live: - result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - raise RuntimeError(f"""{errdesc or 'Error running command'}. -Command: {command} -Error code: {result.returncode}""") - - return "" - - result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env) - - if result.returncode != 0: - - message = f"""{errdesc or 'Error running command'}. 
-Command: {command} -Error code: {result.returncode} -stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else ''} -stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else ''} -""" - raise RuntimeError(message) - - return result.stdout.decode(encoding="utf8", errors="ignore") - - -def check_run(command): - result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) - return result.returncode == 0 - - -def is_installed(package): - try: - spec = importlib.util.find_spec(package) - except ModuleNotFoundError: - return False - - return spec is not None - - -def repo_dir(name): - return os.path.join(dir_repos, name) - - -def run_python(code, desc=None, errdesc=None): - return run(f'"{python}" -c "{code}"', desc, errdesc) - - -def run_pip(args, desc=None): - if skip_install: - return - - index_url_line = f' --index-url {index_url}' if index_url != '' else '' - return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}") - - -def check_run_python(code): - return check_run(f'"{python}" -c "{code}"') - - -def git_clone(url, dir, name, commithash=None): - # TODO clone into temporary dir and move if successful - - if os.path.exists(dir): - if commithash is None: - return - - current_hash = run(f'"{git}" -C "{dir}" rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}").strip() - if current_hash == commithash: - return - - run(f'"{git}" -C "{dir}" fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}") - run(f'"{git}" -C "{dir}" checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}") - return - - run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}") - - if commithash is not None: - run(f'"{git}" -C "{dir}" checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}") - - -def version_check(commit): - try: - import requests - commits = requests.get('https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/branches/master').json() - if commit != "" and commits['commit']['sha'] != commit: - print("--------------------------------------------------------") - print("| You are not up to date with the most recent release. |") - print("| Consider running `git pull` to update. 
|") - print("--------------------------------------------------------") - elif commits['commit']['sha'] == commit: - print("You are up to date with the most recent release.") - else: - print("Not a git clone, can't perform version check.") - except Exception as e: - print("version check failed", e) - - -def run_extension_installer(extension_dir): - path_installer = os.path.join(extension_dir, "install.py") - if not os.path.isfile(path_installer): - return - - try: - env = os.environ.copy() - env['PYTHONPATH'] = os.path.abspath(".") - - print(run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env)) - except Exception as e: - print(e, file=sys.stderr) - - -def list_extensions(settings_file): - settings = {} - - try: - if os.path.isfile(settings_file): - with open(settings_file, "r", encoding="utf8") as file: - settings = json.load(file) - except Exception as e: - print(e, file=sys.stderr) - - disabled_extensions = set(settings.get('disabled_extensions', [])) - - return [x for x in os.listdir(dir_extensions) if x not in disabled_extensions] - - -def run_extensions_installers(settings_file): - if not os.path.isdir(dir_extensions): - return - - for dirname_extension in list_extensions(settings_file): - run_extension_installer(os.path.join(dir_extensions, dirname_extension)) - - -def prepare_environment(): - global skip_install - - torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117") - requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt") - commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test --use-cpu all --precision full --no-half") - - xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.16rc425') - gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379") - clip_package = os.environ.get('CLIP_PACKAGE', "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1") - openclip_package = os.environ.get('OPENCLIP_PACKAGE', "git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b") - - stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/Stability-AI/stablediffusion.git") - taming_transformers_repo = os.environ.get('TAMING_TRANSFORMERS_REPO', "https://github.com/CompVis/taming-transformers.git") - k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git') - codeformer_repo = os.environ.get('CODEFORMER_REPO', 'https://github.com/sczhou/CodeFormer.git') - blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git') - - stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "47b6b607fdd31875c9279cd2f4f16b92e4ea958e") - taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6") - k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "5b3af030dd83e0297272d861c19477735d0317ec") - codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af") - blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9") - - sys.argv += shlex.split(commandline_args) - - parser = argparse.ArgumentParser(add_help=False) - parser.add_argument("--ui-settings-file", 
type=str, help="filename to use for ui settings", default='config.json') - args, _ = parser.parse_known_args(sys.argv) - - sys.argv, _ = extract_arg(sys.argv, '-f') - sys.argv, skip_torch_cuda_test = extract_arg(sys.argv, '--skip-torch-cuda-test') - sys.argv, skip_python_version_check = extract_arg(sys.argv, '--skip-python-version-check') - sys.argv, reinstall_xformers = extract_arg(sys.argv, '--reinstall-xformers') - sys.argv, reinstall_torch = extract_arg(sys.argv, '--reinstall-torch') - sys.argv, update_check = extract_arg(sys.argv, '--update-check') - sys.argv, run_tests, test_dir = extract_opt(sys.argv, '--tests') - sys.argv, skip_install = extract_arg(sys.argv, '--skip-install') - xformers = '--xformers' in sys.argv - ngrok = '--ngrok' in sys.argv - - if not skip_python_version_check: - check_python_version() - - commit = commit_hash() - - print(f"Python {sys.version}") - print(f"Commit hash: {commit}") - - if reinstall_torch or not is_installed("torch") or not is_installed("torchvision"): - run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True) - - if not skip_torch_cuda_test: - run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'") - - if not is_installed("gfpgan"): - run_pip(f"install {gfpgan_package}", "gfpgan") - - if not is_installed("clip"): - run_pip(f"install {clip_package}", "clip") - - if not is_installed("open_clip"): - run_pip(f"install {openclip_package}", "open_clip") - - if (not is_installed("xformers") or reinstall_xformers) and xformers: - if platform.system() == "Windows": - if platform.python_version().startswith("3.10"): - run_pip(f"install -U -I --no-deps {xformers_package}", "xformers") - else: - print("Installation of xformers is not supported in this version of Python.") - print("You can also check this and build manually: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers#building-xformers-on-windows-by-duckness") - if not is_installed("xformers"): - exit(0) - elif platform.system() == "Linux": - run_pip(f"install {xformers_package}", "xformers") - - if not is_installed("pyngrok") and ngrok: - run_pip("install pyngrok", "ngrok") - - os.makedirs(dir_repos, exist_ok=True) - - git_clone(stable_diffusion_repo, repo_dir('stable-diffusion-stability-ai'), "Stable Diffusion", stable_diffusion_commit_hash) - git_clone(taming_transformers_repo, repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash) - git_clone(k_diffusion_repo, repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash) - git_clone(codeformer_repo, repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash) - git_clone(blip_repo, repo_dir('BLIP'), "BLIP", blip_commit_hash) - - if not is_installed("lpips"): - run_pip(f"install -r {os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}", "requirements for CodeFormer") - - run_pip(f"install -r {requirements_file}", "requirements for Web UI") - - run_extensions_installers(settings_file=args.ui_settings_file) - - if update_check: - version_check(commit) - - if "--exit" in sys.argv: - print("Exiting because of --exit argument") - exit(0) - - if run_tests: - exitcode = tests(test_dir) - exit(exitcode) - - -def tests(test_dir): - if "--api" not in sys.argv: - sys.argv.append("--api") - if "--ckpt" not in sys.argv: - sys.argv.append("--ckpt") - sys.argv.append("./test/test_files/empty.pt") - if "--skip-torch-cuda-test" not in 
sys.argv: - sys.argv.append("--skip-torch-cuda-test") - if "--disable-nan-check" not in sys.argv: - sys.argv.append("--disable-nan-check") - - print(f"Launching Web UI in another process for testing with arguments: {' '.join(sys.argv[1:])}") - - os.environ['COMMANDLINE_ARGS'] = "" - with open('test/stdout.txt', "w", encoding="utf8") as stdout, open('test/stderr.txt', "w", encoding="utf8") as stderr: - proc = subprocess.Popen([sys.executable, *sys.argv], stdout=stdout, stderr=stderr) - - import test.server_poll - exitcode = test.server_poll.run_tests(proc, test_dir) - - print(f"Stopping Web UI process with id {proc.pid}") - proc.kill() - return exitcode - - -def start(): - print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with arguments: {' '.join(sys.argv[1:])}") - import webui - if '--nowebui' in sys.argv: - webui.api_only() - else: - webui.webui() - - -if __name__ == "__main__": - prepare_environment() - start() diff --git a/spaces/wahaha/u2net_portrait/U-2-Net/model/__init__.py b/spaces/wahaha/u2net_portrait/U-2-Net/model/__init__.py deleted file mode 100644 index 4d8fa272fb03208e17723b0269eb579b81514540..0000000000000000000000000000000000000000 --- a/spaces/wahaha/u2net_portrait/U-2-Net/model/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .u2net import U2NET -from .u2net import U2NETP diff --git a/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py b/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py deleted file mode 100644 index f490c4bbd598a35de43d36ceafcbd769e7ff21bf..0000000000000000000000000000000000000000 --- a/spaces/wendys-llc/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py +++ /dev/null @@ -1,43 +0,0 @@ -batch_size = 1 -modelname = "groundingdino" -backbone = "swin_B_384_22k" -position_embedding = "sine" -pe_temperatureH = 20 -pe_temperatureW = 20 -return_interm_indices = [1, 2, 3] -backbone_freeze_keywords = None -enc_layers = 6 -dec_layers = 6 -pre_norm = False -dim_feedforward = 2048 -hidden_dim = 256 -dropout = 0.0 -nheads = 8 -num_queries = 900 -query_dim = 4 -num_patterns = 0 -num_feature_levels = 4 -enc_n_points = 4 -dec_n_points = 4 -two_stage_type = "standard" -two_stage_bbox_embed_share = False -two_stage_class_embed_share = False -transformer_activation = "relu" -dec_pred_bbox_embed_share = True -dn_box_noise_scale = 1.0 -dn_label_noise_ratio = 0.5 -dn_label_coef = 1.0 -dn_bbox_coef = 1.0 -embed_init_tgt = True -dn_labelbook_size = 2000 -max_text_len = 256 -text_encoder_type = "bert-base-uncased" -use_text_enhancer = True -use_fusion_layer = True -use_checkpoint = True -use_transformer_ckpt = True -use_text_cross_attention = True -text_dropout = 0.0 -fusion_dropout = 0.0 -fusion_droppath = 0.1 -sub_sentence_present = True diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/learn/skill_loader.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/learn/skill_loader.py deleted file mode 100644 index 83200bca6fefe528c7e93c18ffb6d5a8da64ac61..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/learn/skill_loader.py +++ /dev/null @@ -1,96 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/18 -@Author : mashenquan -@File : skill_loader.py -@Desc : Skill YAML Configuration Loader. 
-""" -from pathlib import Path -from typing import Dict, List, Optional - -import yaml -from pydantic import BaseModel, Field - -from metagpt.config import CONFIG - - -class Example(BaseModel): - ask: str - answer: str - - -class Returns(BaseModel): - type: str - format: Optional[str] = None - - -class Prerequisite(BaseModel): - name: str - type: Optional[str] = None - description: Optional[str] = None - default: Optional[str] = None - - -class Skill(BaseModel): - name: str - description: str - id: str - x_prerequisite: Optional[List[Prerequisite]] = Field(default=None, alias="x-prerequisite") - arguments: Dict - examples: List[Example] - returns: Returns - - -class EntitySkills(BaseModel): - skills: List[Skill] - - -class SkillsDeclaration(BaseModel): - entities: Dict[str, EntitySkills] - - -class SkillLoader: - def __init__(self, skill_yaml_file_name: Path = None): - if not skill_yaml_file_name: - skill_yaml_file_name = Path(__file__).parent.parent.parent / ".well-known/skills.yaml" - with open(str(skill_yaml_file_name), "r") as file: - skills = yaml.safe_load(file) - self._skills = SkillsDeclaration(**skills) - - def get_skill_list(self, entity_name: str = "Assistant") -> Dict: - """Return the skill name based on the skill description.""" - entity_skills = self.get_entity(entity_name) - if not entity_skills: - return {} - - agent_skills = CONFIG.agent_skills - if not agent_skills: - return {} - - class AgentSkill(BaseModel): - name: str - - names = [AgentSkill(**i).name for i in agent_skills] - description_to_name_mappings = {} - for s in entity_skills.skills: - if s.name not in names: - continue - description_to_name_mappings[s.description] = s.name - - return description_to_name_mappings - - def get_skill(self, name, entity_name: str = "Assistant") -> Skill: - """Return a skill by name.""" - entity = self.get_entity(entity_name) - if not entity: - return None - for sk in entity.skills: - if sk.name == name: - return sk - - def get_entity(self, name) -> EntitySkills: - """Return a list of skills for the entity.""" - if not self._skills: - return None - return self._skills.entities.get(name) diff --git a/spaces/wffcyrus/MetaGPT-v1/startup.py b/spaces/wffcyrus/MetaGPT-v1/startup.py deleted file mode 100644 index 920d63e367fa88f25d80ee364ed25147abf4317c..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/startup.py +++ /dev/null @@ -1,42 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -import asyncio -import platform -import fire - -from metagpt.roles import Architect, Engineer, ProductManager, ProjectManager, QaEngineer -from metagpt.software_company import SoftwareCompany - - -async def startup(idea: str, investment: float = 3.0, n_round: int = 5, - code_review: bool = True, run_tests: bool = True): - """Run a startup. Be a boss.""" - company = SoftwareCompany() - company.hire([ProductManager(), - Architect(), - ProjectManager(), - Engineer(n_borg=5, use_code_review=code_review)]) - if run_tests: - # developing features: run tests on the spot and identify bugs (bug fixing capability comes soon!) - company.hire([QaEngineer()]) - company.invest(investment) - company.start_project(idea) - await company.run(n_round=n_round) - - -def main(idea: str, investment: float = 10.0, n_round: int = 10, code_review: bool = True, run_tests: bool = True): - """ - We are a software startup comprised of AI. By investing in us, you are empowering a future filled with limitless possibilities. - :param idea: Your innovative idea, such as "Creating a snake game." 
- :param investment: As an investor, you have the opportunity to contribute a certain dollar amount to this AI company. - :param n_round: - :param code_review: Whether to use code review. - :return: - """ - if platform.system() == "Windows": - asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) - asyncio.run(startup(idea, investment, n_round, code_review, run_tests)) - - -if __name__ == '__main__': - fire.Fire(main) diff --git a/spaces/wong26/faster-whisper-webui/src/whisper/whisperContainer.py b/spaces/wong26/faster-whisper-webui/src/whisper/whisperContainer.py deleted file mode 100644 index 7826d28aa3e6b345febdbd1e6297b4bba9e7fbdc..0000000000000000000000000000000000000000 --- a/spaces/wong26/faster-whisper-webui/src/whisper/whisperContainer.py +++ /dev/null @@ -1,216 +0,0 @@ -# External programs -import abc -import os -import sys -from typing import List -from urllib.parse import urlparse -import torch -import urllib3 -from src.hooks.progressListener import ProgressListener - -import whisper -from whisper import Whisper - -from src.config import ModelConfig, VadInitialPromptMode -from src.hooks.whisperProgressHook import create_progress_listener_handle - -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache -from src.prompts.abstractPromptStrategy import AbstractPromptStrategy -from src.utils import download_file -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer - -class WhisperContainer(AbstractWhisperContainer): - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - super().__init__(model_name, device, compute_type, download_root, cache, models) - - def ensure_downloaded(self): - """ - Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before - passing the container to a subprocess. - """ - # Warning: Using private API here - try: - root_dir = self.download_root - model_config = self._get_model_config() - - if root_dir is None: - root_dir = os.path.join(os.path.expanduser("~"), ".cache", "whisper") - - if self.model_name in whisper._MODELS: - whisper._download(whisper._MODELS[self.model_name], root_dir, False) - else: - # If the model is not in the official list, see if it needs to be downloaded - model_config.download_url(root_dir) - return True - - except Exception as e: - # Given that the API is private, it could change at any time. We don't want to crash the program - print("Error pre-downloading model: " + str(e)) - return False - - def _get_model_config(self) -> ModelConfig: - """ - Get the model configuration for the model. - """ - for model in self.models: - if model.name == self.model_name: - return model - return None - - def _create_model(self): - print("Loading whisper model " + self.model_name) - model_config = self._get_model_config() - - # Note that the model will not be downloaded in the case of an official Whisper model - model_path = self._get_model_path(model_config, self.download_root) - - return whisper.load_model(model_path, device=self.device, download_root=self.download_root) - - def create_callback(self, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. 
- - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - prompt_strategy: AbstractPromptStrategy - The prompt strategy to use. If not specified, the prompt from Whisper will be used. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. - """ - return WhisperCallback(self, language=language, task=task, prompt_strategy=prompt_strategy, **decodeOptions) - - def _get_model_path(self, model_config: ModelConfig, root_dir: str = None): - from src.conversion.hf_converter import convert_hf_whisper - """ - Download the model. - - Parameters - ---------- - model_config: ModelConfig - The model configuration. - """ - # See if path is already set - if model_config.path is not None: - return model_config.path - - if root_dir is None: - root_dir = os.path.join(os.path.expanduser("~"), ".cache", "whisper") - - model_type = model_config.type.lower() if model_config.type is not None else "whisper" - - if model_type in ["huggingface", "hf"]: - model_config.path = model_config.url - destination_target = os.path.join(root_dir, model_config.name + ".pt") - - # Convert from HuggingFace format to Whisper format - if os.path.exists(destination_target): - print(f"File {destination_target} already exists, skipping conversion") - else: - print("Saving HuggingFace model in Whisper format to " + destination_target) - convert_hf_whisper(model_config.url, destination_target) - - model_config.path = destination_target - - elif model_type in ["whisper", "w"]: - model_config.path = model_config.url - - # See if URL is just a file - if model_config.url in whisper._MODELS: - # No need to download anything - Whisper will handle it - model_config.path = model_config.url - elif model_config.url.startswith("file://"): - # Get file path - model_config.path = urlparse(model_config.url).path - # See if it is an URL - elif model_config.url.startswith("http://") or model_config.url.startswith("https://"): - # Extension (or file name) - extension = os.path.splitext(model_config.url)[-1] - download_target = os.path.join(root_dir, model_config.name + extension) - - if os.path.exists(download_target) and not os.path.isfile(download_target): - raise RuntimeError(f"{download_target} exists and is not a regular file") - - if not os.path.isfile(download_target): - download_file(model_config.url, download_target) - else: - print(f"File {download_target} already exists, skipping download") - - model_config.path = download_target - # Must be a local file - else: - model_config.path = model_config.url - - else: - raise ValueError(f"Unknown model type {model_type}") - - return model_config.path - -class WhisperCallback(AbstractWhisperCallback): - def __init__(self, model_container: WhisperContainer, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict): - self.model_container = model_container - self.language = language - self.task = task - self.prompt_strategy = prompt_strategy - - self.decodeOptions = decodeOptions - - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. 
- - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. - """ - model = self.model_container.get_model() - - if progress_listener is not None: - with create_progress_listener_handle(progress_listener): - return self._transcribe(model, audio, segment_index, prompt, detected_language) - else: - return self._transcribe(model, audio, segment_index, prompt, detected_language) - - def _transcribe(self, model: Whisper, audio, segment_index: int, prompt: str, detected_language: str): - decodeOptions = self.decodeOptions.copy() - - # Add fp16 - if self.model_container.compute_type in ["fp16", "float16"]: - decodeOptions["fp16"] = True - - initial_prompt = self.prompt_strategy.get_segment_prompt(segment_index, prompt, detected_language) \ - if self.prompt_strategy else prompt - - result = model.transcribe(audio, \ - language=self.language if self.language else detected_language, task=self.task, \ - initial_prompt=initial_prompt, \ - **decodeOptions - ) - - # If we have a prompt strategy, we need to increment the current prompt - if self.prompt_strategy: - self.prompt_strategy.on_segment_finished(segment_index, prompt, detected_language, result) - - return result \ No newline at end of file diff --git a/spaces/wwwwwwww2/bingo/src/lib/utils.ts b/spaces/wwwwwwww2/bingo/src/lib/utils.ts deleted file mode 100644 index 3f98a05136bcbb980b49e21bfc7df1fb0ebf0513..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/src/lib/utils.ts +++ /dev/null @@ -1,156 +0,0 @@ -import { clsx, type ClassValue } from 'clsx' -import { customAlphabet } from 'nanoid' -import { twMerge } from 'tailwind-merge' -import { debug } from './isomorphic' - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} - -export const nanoid = customAlphabet( - '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', - 7 -) // 7-character random string - -export function createChunkDecoder() { - const decoder = new TextDecoder() - return function (chunk: Uint8Array | undefined): string { - if (!chunk) return '' - return decoder.decode(chunk, { stream: true }) - } -} - -export function random (start: number, end: number) { - return start + Math.ceil(Math.random() * (end - start)) -} - -export function randomIP() { - return `104.${random(0, 21)}.${random(0, 127)}.${random(1, 255)}` -} - -export const defaultUID = 'xxx' - -export function parseHeadersFromCurl(content: string) { - const re = /-H '([^:]+):\s*([^']+)/mg - const headers: HeadersInit = {} - content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // 将 cmd curl 转成 bash curl - content.replace(re, (_: string, key: string, value: string) => { - headers[key] = value - return '' - }) - return headers -} - -export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2'] -export function encodeHeadersToCookie(content: string) { - const base64Content = btoa(content) - const contentChunks = base64Content.match(/.{1,4000}/g) || [] - return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? 
''}`) -} - -export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) { - let base64Content = '' - ChunkKeys.forEach((key) => { - base64Content += (cookies[key] || '') - }) - try { - return atob(base64Content) - } catch(e) { - return '' - } -} - -export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) { - return parseHeadersFromCurl(extraCurlFromCookie(cookies)) -} - -export function formatDate(input: string | number | Date): string { - const date = new Date(input) - return date.toLocaleDateString('en-US', { - month: 'long', - day: 'numeric', - year: 'numeric' - }) -} - -export function parseCookie(cookie: string, cookieName: string) { - const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie - return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : '' -} - -export function setCookie(key: string, value: string) { - const maxAge = value ? 86400 * 30 : 0 - document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure` -} - -export function getCookie(cookieName: string) { - const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`) - return re.test(document.cookie) ? RegExp.$1 : '' -} - -export function parseCookies(cookie: string, cookieNames: string[]) { - const cookies: { [key: string]: string } = {} - cookieNames.forEach(cookieName => { - cookies[cookieName] = parseCookie(cookie, cookieName) - }) - return cookies -} - -export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0' -export const DEFAULT_IP = process.env.BING_IP || randomIP() - -export function parseUA(ua?: string, default_ua = DEFAULT_UA) { - return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua -} - -export function mockUser(cookies: Partial<{ [key: string]: string }>) { - const { - BING_UA = process.env.BING_UA, - BING_IP = process.env.BING_IP, - _U = defaultUID, - } = cookies - const ua = parseUA(BING_UA) - - return { - 'x-forwarded-for': BING_IP || DEFAULT_IP, - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6', - 'User-Agent': ua!, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: `_U=${_U}` || '', - } -} - -export function createHeaders(cookies: Partial<{ [key: string]: string }>, type?: string) { - let { - BING_HEADER = process.env.BING_HEADER, - BING_IP = process.env.BING_IP, - IMAGE_ONLY = process.env.IMAGE_ONLY ?? 
'1', - } = cookies - const imageOnly = /^(1|true|yes)$/.test(String(IMAGE_ONLY)) - if (BING_HEADER) { - if ( - (imageOnly && type === 'image') - || !imageOnly - ) { - const headers = extraHeadersFromCookie({ - BING_HEADER, - ...cookies, - }) || {} - headers['x-forward-for'] = BING_IP || DEFAULT_IP - return headers - } - } - return mockUser(cookies) -} - -export class WatchDog { - private tid = 0 - watch(fn: Function, timeout = 2000) { - clearTimeout(this.tid) - this.tid = setTimeout(fn, timeout + Math.random() * 1000) - } - reset() { - clearTimeout(this.tid) - } -} diff --git a/spaces/xdecoder/Demo/xdecoder/language/misc.py b/spaces/xdecoder/Demo/xdecoder/language/misc.py deleted file mode 100644 index faf172fbb8a90ed49ca0de9a9ca1d875f2f96215..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/xdecoder/language/misc.py +++ /dev/null @@ -1,64 +0,0 @@ -import random - -import nltk -nltk.data.path.append('/mnt/data/nltk_data') -import numpy as np - -from utils.constants import IMAGENET_DEFAULT_TEMPLATES - - -def get_tag(tokenized, tags): - if not isinstance(tags, (list, tuple)): - tags = [tags] - ret = [] - for (word, pos) in nltk.pos_tag(tokenized): - for tag in tags: - if pos == tag: - ret.append(word) - return ret - -def get_noun_phrase(tokenized): - # Taken from Su Nam Kim Paper... - grammar = r""" - NBAR: - {*} # Nouns and Adjectives, terminated with Nouns - - NP: - {} - {} # Above, connected with in/of/etc... - """ - chunker = nltk.RegexpParser(grammar) - - chunked = chunker.parse(nltk.pos_tag(tokenized)) - continuous_chunk = [] - current_chunk = [] - - for subtree in chunked: - if isinstance(subtree, nltk.Tree): - current_chunk.append(' '.join([token for token, pos in subtree.leaves()])) - elif current_chunk: - named_entity = ' '.join(current_chunk) - if named_entity not in continuous_chunk: - continuous_chunk.append(named_entity) - current_chunk = [] - else: - continue - - return continuous_chunk - -def text_noun_with_prompt_all(text, phrase_prob=0.0, append_text=True): - tokenized = nltk.word_tokenize(text) - - if random.random() >= phrase_prob: - nouns = get_tag(tokenized, ['NN', 'NNS', 'NNP']) - else: - nouns = get_noun_phrase(tokenized) - - - prompt_texts = [np.random.choice(IMAGENET_DEFAULT_TEMPLATES).format(noun) for noun in nouns] - - if append_text: - prompt_texts += [text] - nouns += [text] - - return prompt_texts, nouns \ No newline at end of file diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/architectures/registry.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/architectures/registry.py deleted file mode 100644 index 940e4560f7d052aed4915187410266ab5a4cb4d0..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/architectures/registry.py +++ /dev/null @@ -1,13 +0,0 @@ -_model_entrypoints = {} - -def register_model(fn): - module_name_split = fn.__module__.split('.') - model_name = module_name_split[-1] - _model_entrypoints[model_name] = fn - return fn - -def model_entrypoints(model_name): - return _model_entrypoints[model_name] - -def is_model(model_name): - return model_name in _model_entrypoints \ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/DML/dml.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/DML/dml.py deleted file mode 100644 index 546e573d6ba5d3d41bebbb51062bf9ad451d7344..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/projects/DML/dml.py +++ /dev/null @@ 
-1,149 +0,0 @@ -from __future__ import division, print_function, absolute_import -import torch -from torch.nn import functional as F - -from torchreid.utils import open_all_layers, open_specified_layers -from torchreid.engine import Engine -from torchreid.losses import TripletLoss, CrossEntropyLoss - - -class ImageDMLEngine(Engine): - - def __init__( - self, - datamanager, - model1, - optimizer1, - scheduler1, - model2, - optimizer2, - scheduler2, - margin=0.3, - weight_t=0.5, - weight_x=1., - weight_ml=1., - use_gpu=True, - label_smooth=True, - deploy='model1' - ): - super(ImageDMLEngine, self).__init__(datamanager, use_gpu) - - self.model1 = model1 - self.optimizer1 = optimizer1 - self.scheduler1 = scheduler1 - self.register_model('model1', model1, optimizer1, scheduler1) - - self.model2 = model2 - self.optimizer2 = optimizer2 - self.scheduler2 = scheduler2 - self.register_model('model2', model2, optimizer2, scheduler2) - - self.weight_t = weight_t - self.weight_x = weight_x - self.weight_ml = weight_ml - - assert deploy in ['model1', 'model2', 'both'] - self.deploy = deploy - - self.criterion_t = TripletLoss(margin=margin) - self.criterion_x = CrossEntropyLoss( - num_classes=self.datamanager.num_train_pids, - use_gpu=self.use_gpu, - label_smooth=label_smooth - ) - - def forward_backward(self, data): - imgs, pids = self.parse_data_for_train(data) - - if self.use_gpu: - imgs = imgs.cuda() - pids = pids.cuda() - - outputs1, features1 = self.model1(imgs) - loss1_x = self.compute_loss(self.criterion_x, outputs1, pids) - loss1_t = self.compute_loss(self.criterion_t, features1, pids) - - outputs2, features2 = self.model2(imgs) - loss2_x = self.compute_loss(self.criterion_x, outputs2, pids) - loss2_t = self.compute_loss(self.criterion_t, features2, pids) - - loss1_ml = self.compute_kl_div( - outputs2.detach(), outputs1, is_logit=True - ) - loss2_ml = self.compute_kl_div( - outputs1.detach(), outputs2, is_logit=True - ) - - loss1 = 0 - loss1 += loss1_x * self.weight_x - loss1 += loss1_t * self.weight_t - loss1 += loss1_ml * self.weight_ml - - loss2 = 0 - loss2 += loss2_x * self.weight_x - loss2 += loss2_t * self.weight_t - loss2 += loss2_ml * self.weight_ml - - self.optimizer1.zero_grad() - loss1.backward() - self.optimizer1.step() - - self.optimizer2.zero_grad() - loss2.backward() - self.optimizer2.step() - - loss_dict = { - 'loss1_x': loss1_x.item(), - 'loss1_t': loss1_t.item(), - 'loss1_ml': loss1_ml.item(), - 'loss2_x': loss1_x.item(), - 'loss2_t': loss1_t.item(), - 'loss2_ml': loss1_ml.item() - } - - return loss_dict - - @staticmethod - def compute_kl_div(p, q, is_logit=True): - if is_logit: - p = F.softmax(p, dim=1) - q = F.softmax(q, dim=1) - return -(p * torch.log(q + 1e-8)).sum(1).mean() - - def two_stepped_transfer_learning( - self, epoch, fixbase_epoch, open_layers, model=None - ): - """Two stepped transfer learning. - - The idea is to freeze base layers for a certain number of epochs - and then open all layers for training. 
- - Reference: https://arxiv.org/abs/1611.05244 - """ - model1 = self.model1 - model2 = self.model2 - - if (epoch + 1) <= fixbase_epoch and open_layers is not None: - print( - '* Only train {} (epoch: {}/{})'.format( - open_layers, epoch + 1, fixbase_epoch - ) - ) - open_specified_layers(model1, open_layers) - open_specified_layers(model2, open_layers) - else: - open_all_layers(model1) - open_all_layers(model2) - - def extract_features(self, input): - if self.deploy == 'model1': - return self.model1(input) - - elif self.deploy == 'model2': - return self.model2(input) - - else: - features = [] - features.append(self.model1(input)) - features.append(self.model2(input)) - return torch.cat(features, 1) diff --git a/spaces/xiaoyeAI/clewd/lib/clewd-superfetch.js b/spaces/xiaoyeAI/clewd/lib/clewd-superfetch.js deleted file mode 100644 index a575da19e5e23df06bee5f3fe7ca38a1853adf09..0000000000000000000000000000000000000000 --- a/spaces/xiaoyeAI/clewd/lib/clewd-superfetch.js +++ /dev/null @@ -1,4 +0,0 @@ -/* -* https://gitgud.io/ahsk/clewd -* https://github.com/h-a-s-k/clewd -*/"use strict";const{spawn:e}=require("node:child_process"),{relative:r,resolve:t,join:s,normalize:n,basename:o}=require("node:path"),{writeFileSync:a,unlinkSync:d,existsSync:i}=require("node:fs"),{ReadableStream:c}=require("node:stream/web"),l=e=>"win32"===process.platform?".\\"+e:e,m=e=>"win32"===process.platform||e.indexOf(" ")>-1?`"${e}"`:e,u={win32:{x64:"clewd-superfetch-win-amd64.exe"},darwin:{x64:"clewd-superfetch-mac-amd64",arm64:"clewd-superfetch-linux-arm64"},linux:{x64:"clewd-superfetch-linux-amd64",arm64:"clewd-superfetch-linux-arm64"},android:{x64:"clewd-superfetch-linux-amd64",arm64:"clewd-superfetch-linux-arm64",arm:"clewd-superfetch-android-arm"}}[process.platform]?.[process.arch],f=""+n(r("./","./bin/"+u)),p=n(t(__dirname,f,"../","../")),h=t(p,f);let 
b=[123,34,115,101,99,45,99,104,45,117,97,34,58,34,92,34,67,104,114,111,109,105,117,109,92,34,59,118,61,92,34,49,49,48,92,34,44,32,92,34,78,111,116,32,65,40,66,114,97,110,100,92,34,59,118,61,92,34,50,52,92,34,44,32,92,34,71,111,111,103,108,101,32,67,104,114,111,109,101,92,34,59,118,61,92,34,49,49,48,92,34,34,44,34,115,101,99,45,99,104,45,117,97,45,109,111,98,105,108,101,34,58,34,63,48,34,44,34,115,101,99,45,99,104,45,117,97,45,112,108,97,116,102,111,114,109,34,58,34,92,34,87,105,110,100,111,119,115,92,34,34,44,34,85,112,103,114,97,100,101,45,73,110,115,101,99,117,114,101,45,82,101,113,117,101,115,116,115,34,58,34,49,34,44,34,85,115,101,114,45,65,103,101,110,116,34,58,34,77,111,122,105,108,108,97,47,53,46,48,32,40,87,105,110,100,111,119,115,32,78,84,32,49,48,46,48,59,32,87,105,110,54,52,59,32,120,54,52,41,32,65,112,112,108,101,87,101,98,75,105,116,47,53,51,55,46,51,54,32,40,75,72,84,77,76,44,32,108,105,107,101,32,71,101,99,107,111,41,32,67,104,114,111,109,101,47,49,49,48,46,48,46,48,46,48,32,83,97,102,97,114,105,47,53,51,55,46,51,54,34,44,34,65,99,99,101,112,116,34,58,34,116,101,120,116,47,104,116,109,108,44,97,112,112,108,105,99,97,116,105,111,110,47,120,104,116,109,108,43,120,109,108,44,97,112,112,108,105,99,97,116,105,111,110,47,120,109,108,59,113,61,48,46,57,44,105,109,97,103,101,47,97,118,105,102,44,105,109,97,103,101,47,119,101,98,112,44,105,109,97,103,101,47,97,112,110,103,44,42,47,42,59,113,61,48,46,56,44,97,112,112,108,105,99,97,116,105,111,110,47,115,105,103,110,101,100,45,101,120,99,104,97,110,103,101,59,118,61,98,51,59,113,61,48,46,55,34,44,34,83,101,99,45,70,101,116,99,104,45,83,105,116,101,34,58,34,110,111,110,101,34,44,34,83,101,99,45,70,101,116,99,104,45,77,111,100,101,34,58,34,110,97,118,105,103,97,116,101,34,44,34,83,101,99,45,70,101,116,99,104,45,85,115,101,114,34,58,34,63,49,34,44,34,83,101,99,45,70,101,116,99,104,45,68,101,115,116,34,58,34,100,111,99,117,109,101,110,116,34,44,34,65,99,99,101,112,116,45,69,110,99,111,100,105,110,103,34,58,34,103,122,105,112,44,32,100,101,102,108,97,116,101,44,32,98,114,34,44,34,65,99,99,101,112,116,45,76,97,110,103,117,97,103,101,34,58,34,101,110,45,85,83,44,101,110,59,113,61,48,46,57,34,125],w=[91,34,45,45,99,105,112,104,101,114,115,32,84,76,83,95,65,69,83,95,49,50,56,95,71,67,77,95,83,72,65,50,53,54,44,84,76,83,95,65,69,83,95,50,53,54,95,71,67,77,95,83,72,65,51,56,52,44,84,76,83,95,67,72,65,67,72,65,50,48,95,80,79,76,89,49,51,48,53,95,83,72,65,50,53,54,44,69,67,68,72,69,45,69,67,68,83,65,45,65,69,83,49,50,56,45,71,67,77,45,83,72,65,50,53,54,44,69,67,68,72,69,45,82,83,65,45,65,69,83,49,50,56,45,71,67,77,45,83,72,65,50,53,54,44,69,67,68,72,69,45,69,67,68,83,65,45,65,69,83,50,53,54,45,71,67,77,45,83,72,65,51,56,52,44,69,67,68,72,69,45,82,83,65,45,65,69,83,50,53,54,45,71,67,77,45,83,72,65,51,56,52,44,69,67,68,72,69,45,69,67,68,83,65,45,67,72,65,67,72,65,50,48,45,80,79,76,89,49,51,48,53,44,69,67,68,72,69,45,82,83,65,45,67,72,65,67,72,65,50,48,45,80,79,76,89,49,51,48,53,44,69,67,68,72,69,45,82,83,65,45,65,69,83,49,50,56,45,83,72,65,44,69,67,68,72,69,45,82,83,65,45,65,69,83,50,53,54,45,83,72,65,44,65,69,83,49,50,56,45,71,67,77,45,83,72,65,50,53,54,44,65,69,83,50,53,54,45,71,67,77,45,83,72,65,51,56,52,44,65,69,83,49,50,56,45,83,72,65,44,65,69,83,50,53,54,45,83,72,65,34,44,34,34,44,34,45,45,104,116,116,112,50,34,44,34,45,45,104,116,116,112,50,45,110,111,45,115,101,114,118,101,114,45,112,117,115,104,34,44,34,45,45,102,97,108,115,101,45,115,116,97,114,116,34,44,34,45,45,99,111,109,112,114,101,115,115,101,100,34,44,34,45,45,116,108,115,118,49,46,50,
34,44,34,45,45,110,111,45,110,112,110,34,44,34,45,45,97,108,112,115,34,44,34,45,45,116,108,115,45,112,101,114,109,117,116,101,45,101,120,116,101,110,115,105,111,110,115,34,44,34,45,45,99,101,114,116,45,99,111,109,112,114,101,115,115,105,111,110,32,98,114,111,116,108,105,34,44,34,45,45,108,111,99,97,116,105,111,110,34,93];const y=(e=false)=>{if(!u||!i(h)){e&&console.warn(`superfetch [err] unavailable for ${process.platform}-${process.arch}, use 3.8.5 for the time being\n`);return false}e&&console.log(`superfetch [found] ${r(__dirname,h)}\n`);return true},x=(t,n)=>{n.headers||(n.headers={});"string"!=typeof n.body&&(n.body=n.body?JSON.stringify(n.body):"");if(!y())return;const o=r("./","bin/cfg"),i=r("./","bin/pyld"),c=r("./","bin/hdr"),u=r("./","bin/ca");let x={...JSON.parse(Buffer.from(b).toString()),...n.headers};const S=Object.values(x);x=Object.keys(x).map(((e,r)=>`${e}: ${S[r]}`));const _=m(l(o)),g=m(l(c)),v=m(l(i)),O=["-v","--cacert",""+m(l(u)),"--config",""+_,"--header","@"+g];if("POST"===n.method){O.push("--data");O.push("@"+v)}const j=[...JSON.parse(Buffer.from(w).toString()),"-X "+(n.method||"GET")];a(s(__dirname,o),j.join("\n"));a(s(__dirname,c),x.join("\n"));n.body&&a(s(__dirname,i),n.body);return new Promise((r=>{const a=e("android"===process.platform?h:f,[...O,""+t],{cwd:p,windowsHide:true,killSignal:"SIGKILL",windowsVerbatimArguments:true,detached:"win32"!==process.platform});a.superfetch=true;a.rape=function(){this.stdout?.end();this.stderr?.end()}.bind(a);a.once("spawn",(()=>{a.stream=n.stream||false;if(a.stream){Object.defineProperty(a,"body",{get:()=>a.stdout});return r(a)}a.body="";a.stdout.on("data",(e=>a.body+=e.toString()));a.json=async()=>JSON.parse(a.body);a.text=async()=>a.body;a.stdout.on("end",(()=>{a.stdout.removeAllListeners();return r(a)}))}));a.once("error",(e=>{console.warn("superfetch [err]",e)}));a.once("close",(()=>{try{d(s(__dirname,o));d(s(__dirname,c));n.body&&d(s(__dirname,i))}catch(e){}a.stdout.removeAllListeners();a.stderr.removeAllListeners();this.body?.removeAllListeners()}));a.stderr.on("data",(e=>{const r=/HTTP\/2 (\d{3})+/g,t=(e=e.toString().trim()).match(r);if(!a.status&&t){const t=r.exec(e);a.status=+t[1]}const s=/(?:< )(.+?)(?:: )(.+)/g,n=e.match(s);if(n){const e={};n.forEach((r=>{const t=r.split(s);e[t?.[1]]=t?.[2]}));a.headers=e}}))}))};module.exports.ClewdSuperfetch=x;module.exports.SuperfetchAvailable=y;module.exports.Binary=f; \ No newline at end of file diff --git a/spaces/xinli80/gradio-image-generator/app.py b/spaces/xinli80/gradio-image-generator/app.py deleted file mode 100644 index ada595a2ab759a162c4a95f19acd44ee575f705c..0000000000000000000000000000000000000000 --- a/spaces/xinli80/gradio-image-generator/app.py +++ /dev/null @@ -1,25 +0,0 @@ -#from IPython.display import Image, display, HTML -#from PIL import Image -from transformers import pipeline -from diffusers import DiffusionPipeline -import gradio as gr - -# Image2Txt and Txt2Image transformers -captioner = pipeline("image-to-text",model="Salesforce/blip-image-captioning-base") -generator = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") -generator.to("cuda") -def caption_image_generator(image): - caption = captioner(image)[0]['generated_text'] - image = generator(caption).images[0] - return [caption, image] - -with gr.Blocks() as demo: - gr.Markdown("# Caption-Generate Art 🖍️") - image_upload = gr.Image(label="Input Image",type="pil") - btn_all = gr.Button("Caption and generate") - caption = gr.Textbox(label="Generated caption") - image_output = 
gr.Image(label="Generated Image") - btn_all.click(fn=caption_image_generator, inputs=[image_upload], outputs=[caption, image_output]) - -#gr.close_all() -demo.launch() diff --git a/spaces/xuetao/bingo3/src/lib/hooks/use-enter-submit.tsx b/spaces/xuetao/bingo3/src/lib/hooks/use-enter-submit.tsx deleted file mode 100644 index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/lib/hooks/use-enter-submit.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import { useRef, type RefObject } from 'react' - -export function useEnterSubmit(): { - formRef: RefObject - onKeyDown: (event: React.KeyboardEvent) => void -} { - const formRef = useRef(null) - - const handleKeyDown = ( - event: React.KeyboardEvent - ): void => { - if ( - event.key === 'Enter' && - !event.shiftKey && - !event.nativeEvent.isComposing - ) { - formRef.current?.requestSubmit() - event.preventDefault() - } - } - - return { formRef, onKeyDown: handleKeyDown } -} diff --git a/spaces/xxccc/gpt-academic/docs/README.md.Portuguese.md b/spaces/xxccc/gpt-academic/docs/README.md.Portuguese.md deleted file mode 100644 index 816ced1993b05c84ec8a3cd84c42adf1c9757cd2..0000000000000000000000000000000000000000 --- a/spaces/xxccc/gpt-academic/docs/README.md.Portuguese.md +++ /dev/null @@ -1,320 +0,0 @@ -> **Nota** -> -> Ao instalar as dependências, por favor, selecione rigorosamente as versões **especificadas** no arquivo requirements.txt. -> -> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/` -> - -# Otimização acadêmica GPT (GPT Academic) - -**Se você gostou deste projeto, por favor dê um Star. Se você criou atalhos acadêmicos mais úteis ou plugins funcionais, sinta-se livre para abrir uma issue ou pull request. Nós também temos um README em [Inglês|](README_EN.md)[日本語|](README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](README_RS.md)[Français](README_FR.md) traduzidos por este próprio projeto. -Para traduzir este projeto para qualquer idioma com o GPT, leia e execute [`multi_language.py`](multi_language.py) (experimental). - -> **Nota** -> -> 1. Por favor, preste atenção que somente os plugins de funções (botões) com a cor **vermelha** podem ler arquivos. Alguns plugins estão localizados no **menu suspenso** na área de plugins. Além disso, nós damos as boas-vindas com a **maior prioridade** e gerenciamos quaisquer novos plugins PR! -> -> 2. As funções de cada arquivo neste projeto são detalhadas em [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A), auto-análises do projeto geradas pelo GPT também estão podem ser chamadas a qualquer momento ao clicar nos plugins relacionados. As perguntas frequentes estão resumidas no [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Instruções de Instalação](#installation). -> -> 3. Este projeto é compatível com e incentiva o uso de modelos de linguagem nacionais, como chatglm e RWKV, Pangolin, etc. Suporta a coexistência de várias chaves de API e pode ser preenchido no arquivo de configuração como `API_KEY="openai-key1,openai-key2,api2d-key3"`. Quando precisar alterar temporariamente o `API_KEY`, basta digitar o `API_KEY` temporário na área de entrada e pressionar Enter para que ele entre em vigor. - -
                    Funcionalidade | Descrição ---- | --- -Um clique de polimento | Suporte a um clique polimento, um clique encontrar erros de gramática no artigo -Tradução chinês-inglês de um clique | Tradução chinês-inglês de um clique -Explicação de código de um único clique | Exibir código, explicar código, gerar código, adicionar comentários ao código -[Teclas de atalho personalizadas](https://www.bilibili.com/video/BV14s4y1E7jN) | Suporte a atalhos personalizados -Projeto modular | Suporte para poderosos plugins[de função personalizada](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions), os plugins suportam[hot-reload](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) -[Análise automática do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função][um clique para entender](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) o código-fonte do projeto -[Análise do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função] Um clique pode analisar a árvore de projetos do Python/C/C++/Java/Lua/... -Leitura de artigos, [tradução](https://www.bilibili.com/video/BV1KT411x7Wn) de artigos | [Plugin de função] um clique para interpretar o resumo de artigos LaTeX/PDF e gerar resumo -Tradução completa LATEX, polimento|[Plugin de função] Uma clique para traduzir ou polir um artigo LATEX -Geração em lote de comentários | [Plugin de função] Um clique gera comentários de função em lote -[Tradução chinês-inglês](https://www.bilibili.com/video/BV1yo4y157jV/) markdown | [Plugin de função] Você viu o README em 5 linguagens acima? -Relatório de análise de chat | [Plugin de função] Gera automaticamente um resumo após a execução -[Funcionalidade de tradução de artigos completos em PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin de função] Extrai o título e o resumo do artigo PDF e traduz o artigo completo (multithread) -Assistente arXiv | [Plugin de função] Insira o url do artigo arXiv para traduzir o resumo + baixar PDF -Assistente de integração acadêmica do Google | [Plugin de função] Dê qualquer URL de página de pesquisa acadêmica do Google e deixe o GPT escrever[trabalhos relacionados](https://www.bilibili.com/video/BV1GP411U7Az/) -Agregação de informações da Internet + GPT | [Plugin de função] Um clique para obter informações do GPT através da Internet e depois responde a perguntas para informações nunca ficarem desatualizadas -Exibição de fórmulas/imagem/tabela | Pode exibir simultaneamente a forma de renderização e[TEX] das fórmulas, suporte a fórmulas e realce de código -Suporte de plugins de várias linhas | Suporte a várias chamadas em linha do chatgpt, um clique para processamento[de massa de texto](https://www.bilibili.com/video/BV1FT411H7c5/) ou programa -Tema gradio escuro | Adicione ``` /?__theme=dark``` ao final da url do navegador para ativar o tema escuro -[Suporte para vários modelos LLM](https://www.bilibili.com/video/BV1wT411p7yf), suporte para a nova interface API2D | A sensação de ser atendido simultaneamente por GPT3.5, GPT4, [Chatglm THU](https://github.com/THUDM/ChatGLM-6B), [Moss Fudan](https://github.com/OpenLMLab/MOSS) deve ser ótima, certo? 
-Mais modelos LLM incorporados, suporte para a implantação[huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Adicione interface Newbing (New Bing), suporte [JittorLLMs](https://github.com/Jittor/JittorLLMs) THU Introdução ao suporte do LLaMA, RWKV e Pan Gu Alpha -Mais recursos novos mostrados (geração de imagens, etc.) ... | Consulte o final deste documento ... - -
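A nota acima mostra que várias chaves podem coexistir em `API_KEY`. Apenas como ilustração, segue um esboço mínimo e hipotético do `config_private.py` recomendado mais adiante na seção de instalação, com valores de exemplo e somente opções que este próprio README cita (API_KEY, WEB_PORT, AVAIL_LLM_MODELS):

```python
# config_private.py — esboço hipotético com valores de exemplo (não são chaves reais).
# Apenas opções citadas neste README são mostradas.
API_KEY = "openai-key1,openai-key2,api2d-key3"   # várias chaves podem coexistir, separadas por vírgula
WEB_PORT = 50923                                  # porta de exemplo, como nas instruções de Docker abaixo
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "chatglm"]   # subconjunto dos modelos listados nas notas de instalação
```

Segundo as notas de instalação abaixo, o `config_private.py` não é controlado pelo git, portanto manter chaves privadas nele (ou em variáveis de ambiente) é mais seguro do que editar o `config.py` diretamente.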
                    - -- Nova interface (Modifique a opção LAYOUT em `config.py` para alternar entre o layout esquerdo/direito e o layout superior/inferior) -
                    - -
                    - All buttons are dynamically generated by reading functional.py, and you can add custom functions at will, liberating the clipboard - -
                    - -
                    - -- Proofreading/errors correction - - -
                    - -
                    - -- If the output contains formulas, it will be displayed in both tex and rendering format at the same time, which is convenient for copying and reading - - -
                    - -
                    - -- Don't want to read the project code? Just show the whole project to chatgpt - - -
                    - -
                    - -- Mix the use of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) - - -
                    - -
                    - ---- -# Instalação -## Installation-Method 1: Run directly (Windows, Linux or MacOS) - -1. Download the project - -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure the API KEY - -In `config.py`, configure API KEY and other settings, [Special Network Environment Settings] (https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program runs, it will first check whether there is a private configuration file named `config_private.py`, and use the configuration in it to cover the configuration with the same name in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`. The writing format of environment variables is referenced to the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` > `config.py`) - - -3. Install dependencies - -```sh -# (Option I: for those familiar with python)(python version is 3.9 or above, the newer the better), note: use the official pip source or the Alibaba pip source. Temporary solution for changing source: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: for those who are unfamiliar with python) use anaconda, the steps are also similar (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # create anaconda environment -conda activate gptac_venv # activate anaconda environment -python -m pip install -r requirements.txt # This step is the same as the pip installation step -``` - -
                    If you need to support Tsinghua ChatGLM / Fudan MOSS as the backend, click to expand here -

-
-[Optional Step] If you need to support Tsinghua ChatGLM / Fudan MOSS as the backend, you need to install extra dependencies (prerequisites: familiar with Python, have used PyTorch, and a reasonably powerful machine):
-```sh
-# [Optional Step I] Support Tsinghua ChatGLM. Note: if you encounter a "Call ChatGLM fails: cannot load ChatGLM parameters normally" error, refer to the following: 1. the default install is the torch+cpu build, and using CUDA requires uninstalling torch and reinstalling torch+cuda; 2. if the model cannot be loaded because the machine is not powerful enough, you can lower the model precision in request_llm/bridge_chatglm.py by changing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# [Optional Step II] Support Fudan MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: this command must be run from the project root path
-
-# [Optional Step III] Make sure AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently supported models (the jittorllms series currently only supports the docker solution):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
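The precision change mentioned in Optional Step I comes down to editing the `from_pretrained` calls in `request_llm/bridge_chatglm.py`. The sketch below is only an illustration: the tokenizer call is quoted from this README, while the import and the model line are assumptions about how that file is laid out, so adapt them to the actual code in your checkout.

```python
# request_llm/bridge_chatglm.py -- sketch of switching ChatGLM to the int4 weights.
from transformers import AutoTokenizer, AutoModel  # assumed imports

# Before: full-precision ChatGLM-6B (needs enough GPU/CPU memory)
# tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# After: quantized int4 variant for machines with less memory
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)  # assumed matching model call
```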

                    -
-
-
-4. Run
-
-```sh
-python main.py
-```
-
-5. Test the function plug-ins
-```
-- Test the function plug-in template (it asks GPT what happened in history on this day); you can use this function as a template for implementing more complex functions.
-    Click "[Function plug-in template demo] What happened in history today?"
-```
-
-## Installation - Method 2: Using Docker
-
-1. ChatGPT only (recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/chatgpt_academic.git  # download the project
-cd chatgpt_academic                                             # enter the directory
-nano config.py                                                  # edit config.py with any text editor and configure "Proxy", "API_KEY", "WEB_PORT" (e.g. 50923), etc.
-docker build -t gpt-academic .                                  # install
-
-# (Last step - option 1) In a Linux environment, using `--net=host` is easier and faster
-docker run --rm -it --net=host gpt-academic
-# (Last step - option 2) On macOS/Windows, you can only use the -p option to expose the container port (e.g. 50923) to a port on the host
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
-
-``` sh
-# Edit docker-compose.yml: remove solutions 1 and 3, keep solution 2, and follow the instructions in the file's comments
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + Pangu + RWKV (requires familiarity with Docker)
-``` sh
-# Edit docker-compose.yml: remove solutions 1 and 2, keep solution 3, and follow the instructions in the file's comments
-docker-compose up
-```
-
-
-## Installation - Method 3: Other deployment methods
-
-1. How to use reverse-proxy URLs / the Microsoft Azure API
-Just configure API_URL_REDIRECT following the instructions in `config.py`.
-
-2. Deployment on a remote cloud server (requires cloud-server knowledge and experience)
-See the [remote cloud server deployment wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL2 (Windows Subsystem for Linux)
-See the [WSL2 deployment wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to run under a sub-path (e.g. `http://localhost/subpath`)
-See the [FastAPI run instructions](docs/WithFastapi.md)
-
-5. Run with docker-compose
-Read docker-compose.yml and follow the instructions in it.
-
-# Advanced Usage
-## Customize new quick-access buttons / custom function plug-ins
-
-1. Customize new quick-access buttons (academic shortcuts)
-Open `core_functional.py` with any text editor, add an entry like the one below, and restart the program. (If the button has already been added and is visible, both prefix and suffix support live modification and take effect without restarting the program.)
-For example,
-```
-"Super Eng:": {
-  # Prefix, added before your input. For example, used to describe your request: translation, code explanation, polishing, and so on.
-  "Prefix": "Please translate the following content into Chinese and then use a Markdown table to explain the proper nouns that appear in the text: \n \n",
-
-  # Suffix, added after your input. For example, paired with the prefix it can wrap your input in quotation marks.
-  "Suffix": "", -}, -``` -
                    - -
-
-2. Customize function plug-ins
-
-Write powerful function plug-ins to perform tasks you want, including ones you had not thought possible.
-Writing and debugging plug-ins in this project is not difficult overall; if you have some basic Python knowledge, you can implement your own functions on top of the template we provide.
-For details, see the [function plug-in guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-
----
-# Latest update
-## New dynamic features
-
-1. Dialogue saving. Call the "Save current dialogue" function plug-in to save the current dialogue as a readable and restorable HTML file. In addition, calling the "Load dialogue history file" function plug-in from the drop-down menu in the plug-in area restores a previous conversation. Tip: clicking "Load dialogue history file" without specifying a file lets you browse the cached HTML history files, and clicking "Delete all local dialogue history" deletes the entire HTML file cache.
-
                    - -
- - -2. Report generation. Most plug-ins generate a work report after finishing execution. -
                    - - - -
- -3. Modular feature design: the interfaces are simple, yet they support powerful functionality -
                    - - -
- -4. This is an open-source project that can "translate itself". -
                    - -
- -5. Translating other open-source projects is just as simple. -
                    - -
                    - -
                    - -
- -6. Decorative [live2d](https://github.com/fghrsh/live2d_demo) features (disabled by default; `config.py` must be modified to enable them) -
                    - -
- -7. Support for the MOSS language model -
                    - -
- -8. Image generation via OpenAI -
                    - -
- -9. Audio analysis and summarization via OpenAI -
                    - -
- -10. LaTeX text proofreading and error correction. -
                    - -
-
-## Version:
-- Version 3.5 (todo): Use natural language to call every function of this project (high priority)
-- Version 3.4 (todo): Improve multithreading support for the local chatglm
-- Version 3.3: +Built-in Internet functions
-- Version 3.2: Support for more plug-in parameter interfaces (dialogue saving, interpreting code in arbitrary languages, asking arbitrary LLM combinations at the same time)
-- Version 3.1: Support asking multiple GPT models simultaneously! Support for api2d and load balancing across multiple API keys
-- Version 3.0: Support for chatglm and other small LLMs
-- Version 2.6: Refactored the plug-in structure, improved interactivity, added more plug-ins
-- Version 2.5: Self-updating; fixed overly long text and token overflow when summarizing large projects
-- Version 2.4: (1) Added full-text PDF translation; (2) Added the option to move the input area; (3) Added a vertical layout option; (4) Optimized multithreaded plug-ins.
-- Version 2.3: Improved multithreaded interactivity
-- Version 2.2: Support for plug-in hot-reloading
-- Version 2.1: Collapsible layout
-- Version 2.0: Introduced modular function plug-ins
-- Version 1.0: Basic functionality
-
-gpt_academic developer QQ group-2: 610599535
-
-- Known issues
-  - Translation extensions in some browsers can interfere with the front end of this software
-  - A Gradio version that is too new or too old can cause a variety of errors
-
-## References and learning
-
-```
-The code references many excellent projects, mainly:
-
-# Project 1: Tsinghua ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: Tsinghua JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file diff --git a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/metrics/__init__.py b/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/metrics/__init__.py deleted file mode 100644 index db8124b132f91216c0ded226f20ea3a046734728..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/metrics/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
- -# empty diff --git a/spaces/yeashwant/chatgpt-prompt-generator-v12/app.py b/spaces/yeashwant/chatgpt-prompt-generator-v12/app.py deleted file mode 100644 index 5cc2368fc534285ab74935d162b673b9250c1eac..0000000000000000000000000000000000000000 --- a/spaces/yeashwant/chatgpt-prompt-generator-v12/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import gradio as gr - -tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompt-generator-v12") -model = AutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompt-generator-v12", from_tf=True) - -def generate(prompt): - - batch = tokenizer(prompt, return_tensors="pt") - generated_ids = model.generate(batch["input_ids"], max_new_tokens=150) - output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) - return output[0] - -input_component = gr.Textbox(label = "Input a persona, e.g. photographer", value = "photographer") -output_component = gr.Textbox(label = "Prompt") -examples = [["photographer"], ["developer"]] -description = "This app generates ChatGPT prompts, it's based on a BART model trained on [this dataset](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts). 📓 Simply enter a persona that you want the prompt to be generated based on. 🧙🏻🧑🏻‍🚀🧑🏻‍🎨🧑🏻‍🔬🧑🏻‍💻🧑🏼‍🏫🧑🏽‍🌾" -gr.Interface(generate, inputs = input_component, outputs=output_component, examples=examples, title = "👨🏻‍🎤 ChatGPT Prompt Generator v12 👨🏻‍🎤", description=description).launch() diff --git a/spaces/yeqingmei123/face-test/e4e/criteria/w_norm.py b/spaces/yeqingmei123/face-test/e4e/criteria/w_norm.py deleted file mode 100644 index a45ab6f67d8a3f7051be4b7236fa2f38446fd2c1..0000000000000000000000000000000000000000 --- a/spaces/yeqingmei123/face-test/e4e/criteria/w_norm.py +++ /dev/null @@ -1,14 +0,0 @@ -import torch -from torch import nn - - -class WNormLoss(nn.Module): - - def __init__(self, start_from_latent_avg=True): - super(WNormLoss, self).__init__() - self.start_from_latent_avg = start_from_latent_avg - - def forward(self, latent, latent_avg=None): - if self.start_from_latent_avg: - latent = latent - latent_avg - return torch.sum(latent.norm(2, dim=(1, 2))) / latent.shape[0] diff --git a/spaces/yeqingmei123/face-test/op/__init__.py b/spaces/yeqingmei123/face-test/op/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/CODE_OF_CONDUCT.md b/spaces/yfyangd/PictureBookUnderstanding/BLIP/CODE_OF_CONDUCT.md deleted file mode 100644 index b6724718c9512d730bb7f1bcc5848cd420241407..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,105 +0,0 @@ -# Salesforce Open Source Community Code of Conduct - -## About the Code of Conduct - -Equality is a core value at Salesforce. We believe a diverse and inclusive -community fosters innovation and creativity, and are committed to building a -culture where everyone feels included. - -Salesforce open-source projects are committed to providing a friendly, safe, and -welcoming environment for all, regardless of gender identity and expression, -sexual orientation, disability, physical appearance, body size, ethnicity, nationality, -race, age, religion, level of experience, education, socioeconomic status, or -other similar personal characteristics. 
- -The goal of this code of conduct is to specify a baseline standard of behavior so -that people with different social values and communication styles can work -together effectively, productively, and respectfully in our open source community. -It also establishes a mechanism for reporting issues and resolving conflicts. - -All questions and reports of abusive, harassing, or otherwise unacceptable behavior -in a Salesforce open-source project may be reported by contacting the Salesforce -Open Source Conduct Committee at ossconduct@salesforce.com. - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to making participation in our project and -our community a harassment-free experience for everyone, regardless of gender -identity and expression, sexual orientation, disability, physical appearance, -body size, ethnicity, nationality, race, age, religion, level of experience, education, -socioeconomic status, or other similar personal characteristics. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy toward other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or -advances -* Personal attacks, insulting/derogatory comments, or trolling -* Public or private harassment -* Publishing, or threatening to publish, others' private information—such as -a physical or electronic address—without explicit permission -* Other conduct which could reasonably be considered inappropriate in a -professional setting -* Advocating for or encouraging any of the above behaviors - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned with this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies both within project spaces and in public spaces -when an individual is representing the project or its community. Examples of -representing a project or community include using an official project email -address, posting via an official social media account, or acting as an appointed -representative at an online or offline event. Representation of a project may be -further defined and clarified by project maintainers. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the Salesforce Open Source Conduct Committee -at ossconduct@salesforce.com. All complaints will be reviewed and investigated -and will result in a response that is deemed necessary and appropriate to the -circumstances. The committee is obligated to maintain confidentiality with -regard to the reporter of an incident. Further details of specific enforcement -policies may be posted separately. 
- -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership and the Salesforce Open Source Conduct -Committee. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][contributor-covenant-home], -version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html. -It includes adaptions and additions from [Go Community Code of Conduct][golang-coc], -[CNCF Code of Conduct][cncf-coc], and [Microsoft Open Source Code of Conduct][microsoft-coc]. - -This Code of Conduct is licensed under the [Creative Commons Attribution 3.0 License][cc-by-3-us]. - -[contributor-covenant-home]: https://www.contributor-covenant.org (https://www.contributor-covenant.org/) -[golang-coc]: https://golang.org/conduct -[cncf-coc]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md -[microsoft-coc]: https://opensource.microsoft.com/codeofconduct/ -[cc-by-3-us]: https://creativecommons.org/licenses/by/3.0/us/ diff --git a/spaces/ylacombe/accessible-mistral/conversion_iso639.py b/spaces/ylacombe/accessible-mistral/conversion_iso639.py deleted file mode 100644 index 1c3e66e40f1064f9ab0eb0a32f5ea8e93e7def89..0000000000000000000000000000000000000000 --- a/spaces/ylacombe/accessible-mistral/conversion_iso639.py +++ /dev/null @@ -1,810 +0,0 @@ -# Language dict -language_code_to_name = { - "afr": "Afrikaans", - "amh": "Amharic", - "arb": "Modern Standard Arabic", - "ary": "Moroccan Arabic", - "arz": "Egyptian Arabic", - "asm": "Assamese", - "ast": "Asturian", - "azj": "North Azerbaijani", - "bel": "Belarusian", - "ben": "Bengali", - "bos": "Bosnian", - "bul": "Bulgarian", - "cat": "Catalan", - "ceb": "Cebuano", - "ces": "Czech", - "ckb": "Central Kurdish", - "cmn": "Mandarin Chinese", - "cym": "Welsh", - "dan": "Danish", - "deu": "German", - "ell": "Greek", - "eng": "English", - "est": "Estonian", - "eus": "Basque", - "fin": "Finnish", - "fra": "French", - "gaz": "West Central Oromo", - "gle": "Irish", - "glg": "Galician", - "guj": "Gujarati", - "heb": "Hebrew", - "hin": "Hindi", - "hrv": "Croatian", - "hun": "Hungarian", - "hye": "Armenian", - "ibo": "Igbo", - "ind": "Indonesian", - "isl": "Icelandic", - "ita": "Italian", - "jav": "Javanese", - "jpn": "Japanese", - "kam": "Kamba", - "kan": "Kannada", - "kat": "Georgian", - "kaz": "Kazakh", - "kea": "Kabuverdianu", - "khk": "Halh Mongolian", - "khm": "Khmer", - "kir": "Kyrgyz", - "kor": "Korean", - "lao": "Lao", - "lit": "Lithuanian", - "ltz": "Luxembourgish", - "lug": "Ganda", - "luo": "Luo", - "lvs": "Standard Latvian", - "mai": "Maithili", - "mal": "Malayalam", - "mar": "Marathi", - "mkd": "Macedonian", - "mlt": "Maltese", - "mni": "Meitei", - "mya": "Burmese", - "nld": "Dutch", - "nno": "Norwegian Nynorsk", - "nob": "Norwegian Bokm\u00e5l", - "npi": "Nepali", - "nya": "Nyanja", - "oci": "Occitan", - "ory": "Odia", - "pan": "Punjabi", - "pbt": "Southern Pashto", - "pes": "Western Persian", - "pol": "Polish", - "por": "Portuguese", - "ron": "Romanian", - "rus": "Russian", - "slk": "Slovak", - "slv": "Slovenian", - "sna": "Shona", - "snd": "Sindhi", - "som": "Somali", - "spa": "Spanish", - "srp": "Serbian", - "swe": "Swedish", - "swh": "Swahili", - "tam": "Tamil", - "tel": "Telugu", - "tgk": "Tajik", - "tgl": "Tagalog", - "tha": "Thai", - "tur": "Turkish", - "ukr": "Ukrainian", - "urd": "Urdu", - "uzn": "Northern Uzbek", - "vie": "Vietnamese", - "xho": "Xhosa", - "yor": 
"Yoruba", - "yue": "Cantonese", - "zlm": "Colloquial Malay", - "zsm": "Standard Malay", - "zul": "Zulu", -} -LANGUAGE_NAME_TO_CODE = {v: k for k, v in language_code_to_name.items()} - -ISO_639_1_TO_3 = { - 'aa': 'aar', - 'ab': 'abk', - 'ae': 'ave', - 'af': 'afr', - 'ak': 'aka', - 'am': 'amh', - 'an': 'arg', - 'ar': 'ara', - 'as': 'asm', - 'av': 'ava', - 'ay': 'aym', - 'az': 'aze', - 'ba': 'bak', - 'be': 'bel', - 'bg': 'bul', - 'bi': 'bis', - 'bm': 'bam', - 'bn': 'ben', - 'bo': 'bod', - 'br': 'bre', - 'bs': 'bos', - 'ca': 'cat', - 'ce': 'che', - 'ch': 'cha', - 'co': 'cos', - 'cr': 'cre', - 'cs': 'ces', - 'cu': 'chu', - 'cv': 'chv', - 'cy': 'cym', - 'da': 'dan', - 'de': 'deu', - 'dv': 'div', - 'dz': 'dzo', - 'ee': 'ewe', - 'el': 'ell', - 'en': 'eng', - 'eo': 'epo', - 'es': 'spa', - 'et': 'est', - 'eu': 'eus', - 'fa': 'fas', - 'ff': 'ful', - 'fi': 'fin', - 'fj': 'fij', - 'fo': 'fao', - 'fr': 'fra', - 'fy': 'fry', - 'ga': 'gle', - 'gd': 'gla', - 'gl': 'glg', - 'gn': 'grn', - 'gu': 'guj', - 'gv': 'glv', - 'ha': 'hau', - 'he': 'heb', - 'hi': 'hin', - 'ho': 'hmo', - 'hr': 'hrv', - 'ht': 'hat', - 'hu': 'hun', - 'hy': 'hye', - 'hz': 'her', - 'ia': 'ina', - 'id': 'ind', - 'ie': 'ile', - 'ig': 'ibo', - 'ii': 'iii', - 'ik': 'ipk', - 'io': 'ido', - 'is': 'isl', - 'it': 'ita', - 'iu': 'iku', - 'ja': 'jpn', - 'jv': 'jav', - 'ka': 'kat', - 'kg': 'kon', - 'ki': 'kik', - 'kj': 'kua', - 'kk': 'kaz', - 'kl': 'kal', - 'km': 'khm', - 'kn': 'kan', - 'ko': 'kor', - 'kr': 'kau', - 'ks': 'kas', - 'ku': 'kur', - 'kv': 'kom', - 'kw': 'cor', - 'ky': 'kir', - 'la': 'lat', - 'lb': 'ltz', - 'lg': 'lug', - 'li': 'lim', - 'ln': 'lin', - 'lo': 'lao', - 'lt': 'lit', - 'lu': 'lub', - 'lv': 'lav', - 'mg': 'mlg', - 'mh': 'mah', - 'mi': 'mri', - 'mk': 'mkd', - 'ml': 'mal', - 'mn': 'mon', - 'mr': 'mar', - 'ms': 'msa', - 'mt': 'mlt', - 'my': 'mya', - 'na': 'nau', - 'nb': 'nob', - 'nd': 'nde', - 'ne': 'nep', - 'ng': 'ndo', - 'nl': 'nld', - 'nn': 'nno', - 'no': 'nor', - 'nr': 'nbl', - 'nv': 'nav', - 'ny': 'nya', - 'oc': 'oci', - 'oj': 'oji', - 'om': 'orm', - 'or': 'ori', - 'os': 'oss', - 'pa': 'pan', - 'pi': 'pli', - 'pl': 'pol', - 'ps': 'pus', - 'pt': 'por', - 'qu': 'que', - 'rm': 'roh', - 'rn': 'run', - 'ro': 'ron', - 'ru': 'rus', - 'rw': 'kin', - 'sa': 'san', - 'sc': 'srd', - 'sd': 'snd', - 'se': 'sme', - 'sg': 'sag', - 'sh': 'hbs', - 'si': 'sin', - 'sk': 'slk', - 'sl': 'slv', - 'sm': 'smo', - 'sn': 'sna', - 'so': 'som', - 'sq': 'sqi', - 'sr': 'srp', - 'ss': 'ssw', - 'st': 'sot', - 'su': 'sun', - 'sv': 'swe', - 'sw': 'swa', - 'ta': 'tam', - 'te': 'tel', - 'tg': 'tgk', - 'th': 'tha', - 'ti': 'tir', - 'tk': 'tuk', - 'tl': 'tgl', - 'tn': 'tsn', - 'to': 'ton', - 'tr': 'tur', - 'ts': 'tso', - 'tt': 'tat', - 'tw': 'twi', - 'ty': 'tah', - 'ug': 'uig', - 'uk': 'ukr', - 'ur': 'urd', - 'uz': 'uzb', - 've': 'ven', - 'vi': 'vie', - 'vo': 'vol', - 'wa': 'wln', - 'wo': 'wol', - 'xh': 'xho', - 'yi': 'yid', - 'yo': 'yor', - 'za': 'zha', - 'zh': 'zho', - 'zu': 'zul'} - -iso639_3_to_1 = { - "aae": "sq", - "aao": "ar", - "aar": "aa", - "aat": "sq", - "abh": "ar", - "abk": "ab", - "abv": "ar", - "acm": "ar", - "acq": "ar", - "acw": "ar", - "acx": "ar", - "acy": "ar", - "adf": "ar", - "aeb": "ar", - "aec": "ar", - "afb": "ar", - "afr": "af", - "ajp": "ar", - "aka": "ak", - "aln": "sq", - "als": "sq", - "amh": "am", - "apc": "ar", - "apd": "ar", - "ara": "ar", - "arb": "ar", - "arg": "an", - "arq": "ar", - "ars": "ar", - "ary": "ar", - "arz": "ar", - "asm": "as", - "auz": "ar", - "ava": "av", - "ave": "ae", - "avl": "ar", - "ayc": "ar", - "ayh": "ar", 
- "ayl": "ar", - "aym": "ay", - "ayn": "ar", - "ayp": "ar", - "ayr": "ay", - "azb": "az", - "aze": "az", - "azj": "az", - "bak": "ba", - "bam": "bm", - "bbz": "ar", - "bel": "be", - "ben": "bn", - "bhr": "mg", - "bis": "bi", - "bjn": "ms", - "bmm": "mg", - "bod": "bo", - "bos": "sh", - "bre": "br", - "btj": "ms", - "bul": "bg", - "bve": "ms", - "bvu": "ms", - "bzc": "mg", - "cat": "ca", - "cdo": "zh", - "ces": "cs", - "cha": "ch", - "che": "ce", - "chu": "cu", - "chv": "cv", - "cjy": "zh", - "ckb": "ku", - "cmn": "zh", - "coa": "ms", - "cor": "kw", - "cos": "co", - "cpx": "zh", - "cre": "cr", - "crj": "cr", - "crk": "cr", - "crl": "cr", - "crm": "cr", - "csw": "cr", - "cwd": "cr", - "cym": "cy", - "czh": "zh", - "czo": "zh", - "dan": "da", - "deu": "de", - "div": "dv", - "dty": "ne", - "dup": "ms", - "dzo": "dz", - "ekk": "et", - "ell": "el", - "eng": "en", - "epo": "eo", - "esi": "ik", - "esk": "ik", - "est": "et", - "eus": "eu", - "ewe": "ee", - "fao": "fo", - "fas": "fa", - "fat": "ak", - "ffm": "ff", - "fij": "fj", - "fin": "fi", - "fra": "fr", - "fry": "fy", - "fub": "ff", - "fuc": "ff", - "fue": "ff", - "fuf": "ff", - "fuh": "ff", - "fui": "ff", - "ful": "ff", - "fuq": "ff", - "fuv": "ff", - "gan": "zh", - "gax": "om", - "gaz": "om", - "gla": "gd", - "gle": "ga", - "glg": "gl", - "glv": "gv", - "gnw": "gn", - "grn": "gn", - "gug": "gn", - "gui": "gn", - "guj": "gu", - "gun": "gn", - "hae": "om", - "hak": "zh", - "hat": "ht", - "hau": "ha", - "hbs": "sh", - "heb": "he", - "her": "hz", - "hin": "hi", - "hji": "ms", - "hmo": "ho", - "hrv": "hr", - "hsn": "zh", - "hun": "hu", - "hye": "hy", - "ibo": "ig", - "ido": "io", - "iii": "ii", - "ike": "iu", - "ikt": "iu", - "iku": "iu", - "ile": "ie", - "ina": "ia", - "ind": "ms", - "ipk": "ik", - "isl": "is", - "ita": "it", - "jak": "ms", - "jav": "jv", - "jax": "ms", - "jpn": "ja", - "kal": "kl", - "kan": "kn", - "kas": "ks", - "kat": "ka", - "kau": "kr", - "kaz": "kk", - "kby": "kr", - "khk": "mn", - "khm": "km", - "kik": "ki", - "kin": "rw", - "kir": "ky", - "kmr": "ku", - "knc": "kr", - "kng": "kg", - "koi": "kv", - "kom": "kv", - "kon": "kg", - "kor": "ko", - "kpv": "kv", - "krt": "kr", - "kua": "kj", - "kur": "ku", - "kvb": "ms", - "kvr": "ms", - "kwy": "kg", - "kxd": "ms", - "lao": "lo", - "lat": "la", - "lav": "lv", - "lce": "ms", - "lcf": "ms", - "ldi": "kg", - "lim": "li", - "lin": "ln", - "lit": "lt", - "liw": "ms", - "ltg": "lv", - "ltz": "lb", - "lub": "lu", - "lug": "lg", - "lvs": "lv", - "lzh": "zh", - "mah": "mh", - "mal": "ml", - "mar": "mr", - "max": "ms", - "meo": "ms", - "mfa": "ms", - "mfb": "ms", - "min": "ms", - "mkd": "mk", - "mlg": "mg", - "mlt": "mt", - "mnp": "zh", - "mon": "mn", - "mqg": "ms", - "mri": "mi", - "msa": "ms", - "msh": "mg", - "msi": "ms", - "mui": "ms", - "mvf": "mn", - "mya": "my", - "nan": "zh", - "nau": "na", - "nav": "nv", - "nbl": "nr", - "nde": "nd", - "ndo": "ng", - "nep": "ne", - "nhd": "gn", - "nld": "nl", - "nno": "no", - "nob": "no", - "nor": "no", - "npi": "ne", - "nya": "ny", - "oci": "oc", - "ojb": "oj", - "ojc": "oj", - "ojg": "oj", - "oji": "oj", - "ojs": "oj", - "ojw": "oj", - "orc": "om", - "ori": "or", - "orm": "om", - "orn": "ms", - "ors": "ms", - "ory": "or", - "oss": "os", - "otw": "oj", - "pan": "pa", - "pbt": "ps", - "pbu": "ps", - "pel": "ms", - "pes": "fa", - "pga": "ar", - "pli": "pi", - "plt": "mg", - "pol": "pl", - "por": "pt", - "prs": "fa", - "pse": "ms", - "pst": "ps", - "pus": "ps", - "qub": "qu", - "qud": "qu", - "que": "qu", - "quf": "qu", - "qug": "qu", - "quh": "qu", 
- "quk": "qu", - "qul": "qu", - "qup": "qu", - "qur": "qu", - "qus": "qu", - "quw": "qu", - "qux": "qu", - "quy": "qu", - "quz": "qu", - "qva": "qu", - "qvc": "qu", - "qve": "qu", - "qvh": "qu", - "qvi": "qu", - "qvj": "qu", - "qvl": "qu", - "qvm": "qu", - "qvn": "qu", - "qvo": "qu", - "qvp": "qu", - "qvs": "qu", - "qvw": "qu", - "qvz": "qu", - "qwa": "qu", - "qwc": "qu", - "qwh": "qu", - "qws": "qu", - "qxa": "qu", - "qxc": "qu", - "qxh": "qu", - "qxl": "qu", - "qxn": "qu", - "qxo": "qu", - "qxp": "qu", - "qxr": "qu", - "qxt": "qu", - "qxu": "qu", - "qxw": "qu", - "roh": "rm", - "ron": "ro", - "run": "rn", - "rus": "ru", - "sag": "sg", - "san": "sa", - "sdc": "sc", - "sdh": "ku", - "sdn": "sc", - "shu": "ar", - "sin": "si", - "skg": "mg", - "slk": "sk", - "slv": "sl", - "sme": "se", - "smo": "sm", - "sna": "sn", - "snd": "sd", - "som": "so", - "sot": "st", - "spa": "es", - "spv": "or", - "sqi": "sq", - "src": "sc", - "srd": "sc", - "sro": "sc", - "srp": "sh", - "ssh": "ar", - "ssw": "ss", - "sun": "su", - "swa": "sw", - "swc": "sw", - "swe": "sv", - "swh": "sw", - "tah": "ty", - "tam": "ta", - "tat": "tt", - "tdx": "mg", - "tel": "te", - "tgk": "tg", - "tgl": "tl", - "tha": "th", - "tir": "ti", - "tkg": "mg", - "tmw": "ms", - "ton": "to", - "tsn": "tn", - "tso": "ts", - "tuk": "tk", - "tur": "tr", - "twi": "ak", - "txy": "mg", - "uig": "ug", - "ukr": "uk", - "urd": "ur", - "urk": "ms", - "uzb": "uz", - "uzn": "uz", - "uzs": "uz", - "ven": "ve", - "vie": "vi", - "vkk": "ms", - "vkt": "ms", - "vol": "vo", - "vro": "et", - "wln": "wa", - "wol": "wo", - "wuu": "zh", - "xho": "xh", - "xmm": "ms", - "xmv": "mg", - "xmw": "mg", - "ydd": "yi", - "yid": "yi", - "yih": "yi", - "yor": "yo", - "yue": "zh", - "zch": "za", - "zeh": "za", - "zgb": "za", - "zgm": "za", - "zgn": "za", - "zha": "za", - "zhd": "za", - "zhn": "za", - "zho": "zh", - "zlj": "za", - "zlm": "ms", - "zln": "za", - "zlq": "za", - "zmi": "ms", - "zqe": "za", - "zsm": "ms", - "zul": "zu", - "zyb": "za", - "zyg": "za", - "zyj": "za", - "zyn": "za", - "zzj": "za" -} - -LANGID_TO_ISO = ISO_639_1_TO_3 # {v: k for k, v in iso639_3_to_1.items()} - -# Source langs: S2ST / S2TT / ASR don't need source lang -# T2TT / T2ST use this -text_source_language_codes = [ - "afr", - "amh", - "arb", - "ary", - "arz", - "asm", - "azj", - "bel", - "ben", - "bos", - "bul", - "cat", - "ceb", - "ces", - "ckb", - "cmn", - "cym", - "dan", - "deu", - "ell", - "eng", - "est", - "eus", - "fin", - "fra", - "gaz", - "gle", - "glg", - "guj", - "heb", - "hin", - "hrv", - "hun", - "hye", - "ibo", - "ind", - "isl", - "ita", - "jav", - "jpn", - "kan", - "kat", - "kaz", - "khk", - "khm", - "kir", - "kor", - "lao", - "lit", - "lug", - "luo", - "lvs", - "mai", - "mal", - "mar", - "mkd", - "mlt", - "mni", - "mya", - "nld", - "nno", - "nob", - "npi", - "nya", - "ory", - "pan", - "pbt", - "pes", - "pol", - "por", - "ron", - "rus", - "slk", - "slv", - "sna", - "snd", - "som", - "spa", - "srp", - "swe", - "swh", - "tam", - "tel", - "tgk", - "tgl", - "tha", - "tur", - "ukr", - "urd", - "uzn", - "vie", - "yor", - "yue", - "zsm", - "zul", -] -TEXT_SOURCE_LANGUAGE_NAMES = sorted([language_code_to_name[code] for code in text_source_language_codes]) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/deform_conv.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/deform_conv.py deleted file mode 100644 index 
e5650c40673882c9164ddc56fd3ee63af0be730c..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/deform_conv.py +++ /dev/null @@ -1,116 +0,0 @@ -import torch -from torch import nn - -from detectron2.layers import Conv2d - - -class _NewEmptyTensorOp(torch.autograd.Function): - @staticmethod - def forward(ctx, x, new_shape): - ctx.shape = x.shape - return x.new_empty(new_shape) - - @staticmethod - def backward(ctx, grad): - shape = ctx.shape - return _NewEmptyTensorOp.apply(grad, shape), None - - -class DFConv2d(nn.Module): - """Deformable convolutional layer""" - def __init__( - self, - in_channels, - out_channels, - with_modulated_dcn=True, - kernel_size=3, - stride=1, - groups=1, - dilation=1, - deformable_groups=1, - bias=False, - padding=None - ): - super(DFConv2d, self).__init__() - if isinstance(kernel_size, (list, tuple)): - assert isinstance(stride, (list, tuple)) - assert isinstance(dilation, (list, tuple)) - assert len(kernel_size) == 2 - assert len(stride) == 2 - assert len(dilation) == 2 - padding = ( - dilation[0] * (kernel_size[0] - 1) // 2, - dilation[1] * (kernel_size[1] - 1) // 2 - ) - offset_base_channels = kernel_size[0] * kernel_size[1] - else: - padding = dilation * (kernel_size - 1) // 2 - offset_base_channels = kernel_size * kernel_size - if with_modulated_dcn: - from detectron2.layers.deform_conv import ModulatedDeformConv - offset_channels = offset_base_channels * 3 # default: 27 - conv_block = ModulatedDeformConv - else: - from detectron2.layers.deform_conv import DeformConv - offset_channels = offset_base_channels * 2 # default: 18 - conv_block = DeformConv - self.offset = Conv2d( - in_channels, - deformable_groups * offset_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - groups=1, - dilation=dilation - ) - nn.init.constant_(self.offset.weight, 0) - nn.init.constant_(self.offset.bias, 0) - ''' - for l in [self.offset, ]: - nn.init.kaiming_uniform_(l.weight, a=1) - torch.nn.init.constant_(l.bias, 0.) 
- ''' - self.conv = conv_block( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - deformable_groups=deformable_groups, - bias=bias - ) - self.with_modulated_dcn = with_modulated_dcn - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - self.dilation = dilation - self.offset_split = offset_base_channels * deformable_groups * 2 - - def forward(self, x, return_offset=False): - if x.numel() > 0: - if not self.with_modulated_dcn: - offset_mask = self.offset(x) - x = self.conv(x, offset_mask) - else: - offset_mask = self.offset(x) - offset = offset_mask[:, :self.offset_split, :, :] - mask = offset_mask[:, self.offset_split:, :, :].sigmoid() - x = self.conv(x, offset, mask) - if return_offset: - return x, offset_mask - return x - # get output shape - output_shape = [ - (i + 2 * p - (di * (k - 1) + 1)) // d + 1 - for i, p, di, k, d in zip( - x.shape[-2:], - self.padding, - self.dilation, - self.kernel_size, - self.stride - ) - ] - output_shape = [x.shape[0], self.conv.weight.shape[0]] + output_shape - return _NewEmptyTensorOp.apply(x, output_shape) \ No newline at end of file diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/animation.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/animation.js deleted file mode 100644 index 7ce949afc8205c3a9f073e23eeb004730c610425..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/animation.js +++ /dev/null @@ -1,17 +0,0 @@ -let Declaration = require('../declaration') - -class Animation extends Declaration { - /** - * Don’t add prefixes for modern values. - */ - check(decl) { - return !decl.value.split(/\s+/).some(i => { - let lower = i.toLowerCase() - return lower === 'reverse' || lower === 'alternate-reverse' - }) - } -} - -Animation.names = ['animation', 'animation-direction'] - -module.exports = Animation diff --git a/spaces/yuyijiong/quad_match_score/app.py b/spaces/yuyijiong/quad_match_score/app.py deleted file mode 100644 index 75ed5f27747f92313a16f65e73054aa570c6b308..0000000000000000000000000000000000000000 --- a/spaces/yuyijiong/quad_match_score/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - -module = evaluate.load("yuyijiong/quad_match_score") -launch_gradio_widget(module) - -# predictions=["a | b | c | pos"] -# references=["a | b | c | pos & e | f | g | neg"] -# -# module.compute(predictions=predictions, references=references) diff --git a/spaces/zhang-wei-jian/docker/node_modules/debug/src/index.js b/spaces/zhang-wei-jian/docker/node_modules/debug/src/index.js deleted file mode 100644 index bf4c57f259df2e16761b45e2636db307c89ba419..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/debug/src/index.js +++ /dev/null @@ -1,10 +0,0 @@ -/** - * Detect Electron renderer / nwjs process, which is node, but we should - * treat as a browser. 
- */ - -if (typeof process === 'undefined' || process.type === 'renderer' || process.browser === true || process.__nwjs) { - module.exports = require('./browser.js'); -} else { - module.exports = require('./node.js'); -} diff --git a/spaces/zhang-wei-jian/docker/node_modules/http-assert/README.md b/spaces/zhang-wei-jian/docker/node_modules/http-assert/README.md deleted file mode 100644 index b1f2f8796ad78818e5ee03b7e2521c9dbc4377c7..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/http-assert/README.md +++ /dev/null @@ -1,116 +0,0 @@ -# http-assert - -[![NPM Version][npm-version-image]][npm-url] -[![NPM Downloads][npm-downloads-image]][npm-url] -[![Node.js Version][node-version-image]][node-version-url] -[![Build Status][ci-image]][ci-url] -[![Test Coverage][coveralls-image]][coveralls-url] - -Assert with status codes. Like ctx.throw() in Koa, but with a guard. - -## Install - -This is a [Node.js](https://nodejs.org/en/) module available through the -[npm registry](https://www.npmjs.com/). Installation is done using the -[`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): - -```bash -$ npm install http-assert -``` - -## Example -```js -var assert = require('http-assert') -var ok = require('assert') - -var username = 'foobar' // username from request - -try { - assert(username === 'fjodor', 401, 'authentication failed') -} catch (err) { - ok(err.status === 401) - ok(err.message === 'authentication failed') - ok(err.expose) -} -``` - -## API - -The API of this module is intended to be similar to the -[Node.js `assert` module](https://nodejs.org/dist/latest/docs/api/assert.html). - -Each function will throw an instance of `HttpError` from -[the `http-errors` module](https://www.npmjs.com/package/http-errors) -when the assertion fails. - -### assert(value, [status], [message], [properties]) - -Tests if `value` is truthy. If `value` is not truthy, an `HttpError` -is thrown that is constructed with the given `status`, `message`, -and `properties`. - -### assert.deepEqual(a, b, [status], [message], [properties]) - -Tests for deep equality between `a` and `b`. Primitive values are -compared with the Abstract Equality Comparison (`==`). If `a` and `b` -are not equal, an `HttpError` is thrown that is constructed with the -given `status`, `message`, and `properties`. - -### assert.equal(a, b, [status], [message], [properties]) - -Tests shallow, coercive equality between `a` and `b` using the Abstract -Equality Comparison (`==`). If `a` and `b` are not equal, an `HttpError` -is thrown that is constructed with the given `status`, `message`, -and `properties`. - -### assert.fail([status], [message], [properties]) - -Always throws an `HttpError` that is constructed with the given `status`, -`message`, and `properties`. - -### assert.notDeepEqual(a, b, [status], [message], [properties]) - -Tests for deep equality between `a` and `b`. Primitive values are -compared with the Abstract Equality Comparison (`==`). If `a` and `b` -are equal, an `HttpError` is thrown that is constructed with the given -`status`, `message`, and `properties`. - -### assert.notEqual(a, b, [status], [message], [properties]) - -Tests shallow, coercive equality between `a` and `b` using the Abstract -Equality Comparison (`==`). If `a` and `b` are equal, an `HttpError` is -thrown that is constructed with the given `status`, `message`, and -`properties`. 
- -### assert.notStrictEqual(a, b, [status], [message], [properties]) - -Tests strict equality between `a` and `b` as determined by the SameValue -Comparison (`===`). If `a` and `b` are equal, an `HttpError` is thrown -that is constructed with the given `status`, `message`, and `properties`. - -### assert.ok(value, [status], [message], [properties]) - -Tests if `value` is truthy. If `value` is not truthy, an `HttpError` -is thrown that is constructed with the given `status`, `message`, -and `properties`. - -### assert.strictEqual(a, b, [status], [message], [properties]) - -Tests strict equality between `a` and `b` as determined by the SameValue -Comparison (`===`). If `a` and `b` are not equal, an `HttpError` -is thrown that is constructed with the given `status`, `message`, -and `properties`. - -## Licence - -[MIT](LICENSE) - -[ci-image]: https://badgen.net/github/checks/jshttp/http-assert/master?label=ci -[ci-url]: https://github.com/jshttp/http-assert/actions?query=workflow%3Aci -[coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/http-assert/master -[coveralls-url]: https://coveralls.io/r/jshttp/http-assert?branch=master -[node-version-image]: https://badgen.net/npm/node/http-assert -[node-version-url]: https://nodejs.org/en/download -[npm-downloads-image]: https://badgen.net/npm/dm/http-assert -[npm-url]: https://npmjs.org/package/http-assert -[npm-version-image]: https://badgen.net/npm/v/http-assert diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/evaluation/__init__.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/evaluation/__init__.py deleted file mode 100644 index 1bf9d8dfba501e83ea5738ff98228c5756949a47..0000000000000000000000000000000000000000 --- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/evaluation/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -""" -@date: 2021/6/29 -@description: -""" diff --git a/spaces/zideliu/styledrop/timm/data/loader.py b/spaces/zideliu/styledrop/timm/data/loader.py deleted file mode 100644 index 317f77df8a9f18d47058a1beca471c9a0d886dab..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/data/loader.py +++ /dev/null @@ -1,257 +0,0 @@ -""" Loader Factory, Fast Collate, CUDA Prefetcher - -Prefetcher and Fast Collate inspired by NVIDIA APEX example at -https://github.com/NVIDIA/apex/commit/d5e2bb4bdeedd27b1dfaf5bb2b24d6c000dee9be#diff-cf86c282ff7fba81fad27a559379d5bf - -Hacked together by / Copyright 2020 Ross Wightman -""" - -import torch.utils.data -import numpy as np - -from .transforms_factory import create_transform -from .constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from .distributed_sampler import OrderedDistributedSampler -from .random_erasing import RandomErasing -from .mixup import FastCollateMixup - - -def fast_collate(batch): - """ A fast collation function optimized for uint8 images (np array or torch) and int64 targets (labels)""" - assert isinstance(batch[0], tuple) - batch_size = len(batch) - if isinstance(batch[0][0], tuple): - # This branch 'deinterleaves' and flattens tuples of input tensors into one tensor ordered by position - # such that all tuple of position n will end up in a torch.split(tensor, batch_size) in nth position - inner_tuple_size = len(batch[0][0]) - flattened_batch_size = batch_size * inner_tuple_size - targets = torch.zeros(flattened_batch_size, dtype=torch.int64) - tensor = torch.zeros((flattened_batch_size, *batch[0][0][0].shape), dtype=torch.uint8) - for i in range(batch_size): - assert len(batch[i][0]) == inner_tuple_size # 
all input tensor tuples must be same length - for j in range(inner_tuple_size): - targets[i + j * batch_size] = batch[i][1] - tensor[i + j * batch_size] += torch.from_numpy(batch[i][0][j]) - return tensor, targets - elif isinstance(batch[0][0], np.ndarray): - targets = torch.tensor([b[1] for b in batch], dtype=torch.int64) - assert len(targets) == batch_size - tensor = torch.zeros((batch_size, *batch[0][0].shape), dtype=torch.uint8) - for i in range(batch_size): - tensor[i] += torch.from_numpy(batch[i][0]) - return tensor, targets - elif isinstance(batch[0][0], torch.Tensor): - targets = torch.tensor([b[1] for b in batch], dtype=torch.int64) - assert len(targets) == batch_size - tensor = torch.zeros((batch_size, *batch[0][0].shape), dtype=torch.uint8) - for i in range(batch_size): - tensor[i].copy_(batch[i][0]) - return tensor, targets - else: - assert False - - -class PrefetchLoader: - - def __init__(self, - loader, - mean=IMAGENET_DEFAULT_MEAN, - std=IMAGENET_DEFAULT_STD, - fp16=False, - re_prob=0., - re_mode='const', - re_count=1, - re_num_splits=0): - self.loader = loader - self.mean = torch.tensor([x * 255 for x in mean]).cuda().view(1, 3, 1, 1) - self.std = torch.tensor([x * 255 for x in std]).cuda().view(1, 3, 1, 1) - self.fp16 = fp16 - if fp16: - self.mean = self.mean.half() - self.std = self.std.half() - if re_prob > 0.: - self.random_erasing = RandomErasing( - probability=re_prob, mode=re_mode, max_count=re_count, num_splits=re_num_splits) - else: - self.random_erasing = None - - def __iter__(self): - stream = torch.cuda.Stream() - first = True - - for next_input, next_target in self.loader: - with torch.cuda.stream(stream): - next_input = next_input.cuda(non_blocking=True) - next_target = next_target.cuda(non_blocking=True) - if self.fp16: - next_input = next_input.half().sub_(self.mean).div_(self.std) - else: - next_input = next_input.float().sub_(self.mean).div_(self.std) - if self.random_erasing is not None: - next_input = self.random_erasing(next_input) - - if not first: - yield input, target - else: - first = False - - torch.cuda.current_stream().wait_stream(stream) - input = next_input - target = next_target - - yield input, target - - def __len__(self): - return len(self.loader) - - @property - def sampler(self): - return self.loader.sampler - - @property - def dataset(self): - return self.loader.dataset - - @property - def mixup_enabled(self): - if isinstance(self.loader.collate_fn, FastCollateMixup): - return self.loader.collate_fn.mixup_enabled - else: - return False - - @mixup_enabled.setter - def mixup_enabled(self, x): - if isinstance(self.loader.collate_fn, FastCollateMixup): - self.loader.collate_fn.mixup_enabled = x - - -def create_loader( - dataset, - input_size, - batch_size, - is_training=False, - use_prefetcher=True, - no_aug=False, - re_prob=0., - re_mode='const', - re_count=1, - re_split=False, - scale=None, - ratio=None, - hflip=0.5, - vflip=0., - color_jitter=0.4, - auto_augment=None, - num_aug_splits=0, - interpolation='bilinear', - mean=IMAGENET_DEFAULT_MEAN, - std=IMAGENET_DEFAULT_STD, - num_workers=1, - distributed=False, - crop_pct=None, - collate_fn=None, - pin_memory=False, - fp16=False, - tf_preprocessing=False, - use_multi_epochs_loader=False -): - re_num_splits = 0 - if re_split: - # apply RE to second half of batch if no aug split otherwise line up with aug split - re_num_splits = num_aug_splits or 2 - dataset.transform = create_transform( - input_size, - is_training=is_training, - use_prefetcher=use_prefetcher, - no_aug=no_aug, - scale=scale, 
- ratio=ratio, - hflip=hflip, - vflip=vflip, - color_jitter=color_jitter, - auto_augment=auto_augment, - interpolation=interpolation, - mean=mean, - std=std, - crop_pct=crop_pct, - tf_preprocessing=tf_preprocessing, - re_prob=re_prob, - re_mode=re_mode, - re_count=re_count, - re_num_splits=re_num_splits, - separate=num_aug_splits > 0, - ) - - sampler = None - if distributed: - if is_training: - sampler = torch.utils.data.distributed.DistributedSampler(dataset) - else: - # This will add extra duplicate entries to result in equal num - # of samples per-process, will slightly alter validation results - sampler = OrderedDistributedSampler(dataset) - - if collate_fn is None: - collate_fn = fast_collate if use_prefetcher else torch.utils.data.dataloader.default_collate - - loader_class = torch.utils.data.DataLoader - - if use_multi_epochs_loader: - loader_class = MultiEpochsDataLoader - - loader = loader_class( - dataset, - batch_size=batch_size, - shuffle=sampler is None and is_training, - num_workers=num_workers, - sampler=sampler, - collate_fn=collate_fn, - pin_memory=pin_memory, - drop_last=is_training, - ) - if use_prefetcher: - prefetch_re_prob = re_prob if is_training and not no_aug else 0. - loader = PrefetchLoader( - loader, - mean=mean, - std=std, - fp16=fp16, - re_prob=prefetch_re_prob, - re_mode=re_mode, - re_count=re_count, - re_num_splits=re_num_splits - ) - - return loader - - -class MultiEpochsDataLoader(torch.utils.data.DataLoader): - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._DataLoader__initialized = False - self.batch_sampler = _RepeatSampler(self.batch_sampler) - self._DataLoader__initialized = True - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for i in range(len(self)): - yield next(self.iterator) - - -class _RepeatSampler(object): - """ Sampler that repeats forever. - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) diff --git a/spaces/zxy666/bingo-chatai666/src/components/learn-more.tsx b/spaces/zxy666/bingo-chatai666/src/components/learn-more.tsx deleted file mode 100644 index a64459ee7900a612292e117a6bda96ee9260990f..0000000000000000000000000000000000000000 --- a/spaces/zxy666/bingo-chatai666/src/components/learn-more.tsx +++ /dev/null @@ -1,39 +0,0 @@ -import React from 'react' -import { SourceAttribution } from '@/lib/bots/bing/types' - -export interface LearnMoreProps { - sourceAttributions?: SourceAttribution[] -} - -export function LearnMore({ sourceAttributions }: LearnMoreProps) { - if (!sourceAttributions?.length) { - return null - } - - return ( -
                    -
                    了解详细信息:
                    -
                    -
                    - {sourceAttributions.map((attribution, index) => { - const { providerDisplayName, seeMoreUrl } = attribution - const { host } = new URL(seeMoreUrl) - return ( - - {index + 1}. {host} - - ) - })} -
                    -
                    -
                    - ) -}